Introduction
CLIAI is a completely free and open-source command-line AI assistant that helps you with terminal commands, system administration, and general questions.
Key Features
- 🔒 Privacy-First: Local AI processing with Ollama - your data never leaves your machine
- 🔑 Bring Your Own Key: Use your own OpenAI, Anthropic, or other LLM API keys
- 💰 Completely Free: No subscriptions, no hidden costs - 100% open source
- 🛡️ Safety-Focused: Multi-level command validation and safety checks
- ⚡ Fast & Reliable: Built-in performance monitoring and circuit breakers
Why CLIAI?
CLIAI bridges the gap between natural language and command-line operations. Instead of searching documentation or Stack Overflow, simply ask CLIAI what you want to do, and it will generate the appropriate command with explanations.
Supported Platforms
- Linux (x86_64, ARM64)
- macOS (Intel, Apple Silicon)
- Windows (x86_64)
Installation
Quick Install
One-Line Install (Linux/macOS)
curl -fsSL https://raw.githubusercontent.com/bytestrix/cliai/main/install.sh | bash
Cargo (All Platforms)
cargo install cliai
Package Managers
Arch Linux (AUR)
yay -S cliai
Ubuntu/Debian
wget https://github.com/bytestrix/cliai/releases/latest/download/cliai.deb
sudo dpkg -i cliai.deb
macOS (Homebrew)
brew tap bytestrix/tap
brew install cliai
Windows (Chocolatey)
choco install cliai
From Source
git clone https://github.com/bytestrix/cliai.git
cd cliai
cargo build --release
sudo cp target/release/cliai /usr/local/bin/
Prerequisites
For local AI processing (recommended):
- Ollama (install instructions at https://ollama.ai)
For cloud AI:
- OpenAI API key, or
- Anthropic API key
Quick Start
Setup
Option 1: Local AI (Ollama)
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull mistral
# Configure CLIAI
cliai select mistral
Option 2: Cloud AI
# Set your API key
cliai set-key openai sk-your-key-here
# Or use Anthropic
cliai set-key anthropic your-key-here
# Select model
cliai select gpt-4
First Commands
# Ask for help
cliai "how do I list all files including hidden ones?"
# System administration
cliai "check disk usage"
# File operations
cliai "find all Python files modified in the last week"
# Git operations
cliai "show me the last 5 commits"
Custom Prefix
Set a custom command prefix:
cliai set-prefix ai
# Now use: ai "your question"
Configuration
Configuration File
CLIAI stores configuration in ~/.config/cliai/config.toml:
model = "mistral"
provider = "ollama"
auto_execute = false
dry_run = false
safety_level = "Medium"
context_timeout = 5000
ollama_url = "http://localhost:11434"
prefix = "cliai"
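The file is plain TOML with flat key/value settings. As an illustration of how such settings map to values, here is a minimal hand-rolled parser sketch in Rust (for demonstration only; a real implementation would presumably use a TOML crate, and the function name is hypothetical):

```rust
use std::collections::HashMap;

/// Parse simple `key = "value"` TOML lines into a string map.
/// Illustration only: handles flat values, no tables or arrays.
fn parse_flat_toml(text: &str) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for line in text.lines() {
        let line = line.trim();
        // Skip blanks and comments.
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        if let Some((key, value)) = line.split_once('=') {
            map.insert(
                key.trim().to_string(),
                value.trim().trim_matches('"').to_string(),
            );
        }
    }
    map
}

fn main() {
    let sample = r#"
model = "mistral"
provider = "ollama"
auto_execute = false
safety_level = "Medium"
"#;
    let cfg = parse_flat_toml(sample);
    println!("model = {}", cfg["model"]);
}
```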
Commands
Model Management
# List available models
cliai list-models
# Select a model
cliai select mistral
# List providers
cliai list-providers
API Key Management
# Set API key
cliai set-key openai sk-...
# Test connection
cliai test-key openai
# Remove key
cliai remove-key openai
Safety Settings
# Set safety level
cliai safety-level high # Maximum safety
cliai safety-level medium # Balanced (default)
cliai safety-level low # Minimal checks
# Enable auto-execution
cliai auto-execute --enable
# Enable dry-run mode
cliai dry-run --enable
Other Settings
# View current configuration
cliai config
# Clear chat history
cliai clear
# Check provider status
cliai provider-status
# View performance metrics
cliai performance-status
Basic Commands
Command Syntax
cliai "your question or request"
Examples
File Operations
cliai "list all files in current directory"
cliai "find files larger than 100MB"
cliai "compress this folder"
cliai "extract archive.tar.gz"
System Information
cliai "check disk usage"
cliai "show memory usage"
cliai "list running processes"
cliai "what's my IP address?"
Git Operations
cliai "show git status"
cliai "create a new branch"
cliai "undo last commit"
cliai "show commit history"
Network Operations
cliai "test internet connection"
cliai "download file from URL"
cliai "check if port 8080 is open"
Package Management
cliai "install package nginx"
cliai "update all packages"
cliai "search for package python"
AI Providers
CLIAI supports multiple AI providers for maximum flexibility.
Ollama (Local)
Recommended for privacy and offline use
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull models
ollama pull mistral
ollama pull llama2
ollama pull codellama
# Configure CLIAI
cliai select mistral
Advantages:
- Complete privacy
- No API costs
- Works offline
- Fast responses
OpenAI
# Set API key
cliai set-key openai sk-your-key-here
# Select model
cliai select gpt-4
cliai select gpt-3.5-turbo
Available Models:
- gpt-4
- gpt-4-turbo
- gpt-3.5-turbo
Anthropic
# Set API key
cliai set-key anthropic your-key-here
# Select model
cliai select claude-3-sonnet
cliai select claude-3-haiku
cliai select claude-3-opus
Available Models:
- claude-3-opus
- claude-3-sonnet
- claude-3-haiku
Provider Fallback
CLIAI automatically falls back to alternative providers if the primary one fails:
- Try local Ollama first (if configured)
- Fall back to cloud provider (if API key set)
- Circuit breaker prevents repeated failures
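That fallback order might look like the following Rust sketch (the types and the breaker flag are illustrative, not CLIAI's actual internals):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Provider {
    Ollama,
    Cloud,
}

/// Pick the first usable provider: local Ollama when it is configured and
/// its circuit breaker is closed, otherwise a cloud provider with a key set.
fn choose_provider(
    ollama_configured: bool,
    ollama_breaker_open: bool,
    cloud_key_set: bool,
) -> Option<Provider> {
    if ollama_configured && !ollama_breaker_open {
        Some(Provider::Ollama)
    } else if cloud_key_set {
        Some(Provider::Cloud)
    } else {
        None
    }
}

fn main() {
    // Local healthy: use Ollama.
    assert_eq!(choose_provider(true, false, true), Some(Provider::Ollama));
    // Local breaker tripped: fall back to cloud.
    assert_eq!(choose_provider(true, true, true), Some(Provider::Cloud));
    // Nothing available.
    assert_eq!(choose_provider(false, false, false), None);
}
```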
Safety Features
CLIAI includes multiple layers of safety to protect your system.
Safety Levels
High (Recommended for beginners)
- Blocks dangerous operations
- Requires confirmation for system changes
- Maximum validation
Medium (Default)
- Balanced safety and convenience
- Confirms risky operations
- Standard validation
Low (Experienced users)
- Minimal safety checks
- Allows most operations
- Basic validation only
Command Validation
Every command goes through multiple validation layers:
- Syntax Checking: Validates shell syntax
- Placeholder Detection: Catches AI hallucinations
- Risk Assessment: Categorizes command danger level
- Pattern Matching: Detects known dangerous patterns
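Placeholder detection, for instance, can be as simple as scanning the generated command for template-style tokens the model failed to fill in. A hedged sketch (the markers checked here are illustrative, not the exact set CLIAI uses):

```rust
/// Return true if a generated command still contains template-style
/// placeholders such as `<filename>` or `{path}`.
fn has_placeholder(cmd: &str) -> bool {
    // Illustrative markers; a real validator would need a richer pattern
    // set to avoid false positives on shell redirection like `< in.txt`.
    let has_angle = cmd.contains('<') && cmd.contains('>');
    let has_brace = cmd.contains('{') && cmd.contains('}');
    has_angle || has_brace
}

fn main() {
    assert!(has_placeholder("rm <filename>"));
    assert!(has_placeholder("cp {src} {dst}"));
    assert!(!has_placeholder("ls -la"));
}
```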
Dangerous Patterns Detected
- `rm -rf /` - System deletion
- `dd if=/dev/zero` - Disk wiping
- `chmod 777` - Insecure permissions
- `:(){ :|:& };:` - Fork bombs
- `curl | sh` - Unverified script execution
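A minimal sketch of such a pattern check in Rust (plain substring matching for illustration; the actual validator presumably parses commands more precisely):

```rust
/// Known-dangerous substrings, mirroring the patterns listed above.
const DANGEROUS_PATTERNS: &[&str] = &[
    "rm -rf /",        // system deletion
    "dd if=/dev/zero", // disk wiping
    "chmod 777",       // insecure permissions
    ":(){ :|:& };:",   // fork bomb
    "| sh",            // piping a download straight into a shell
];

fn is_dangerous(cmd: &str) -> bool {
    DANGEROUS_PATTERNS.iter().any(|p| cmd.contains(p))
}

fn main() {
    assert!(is_dangerous("sudo rm -rf / --no-preserve-root"));
    assert!(is_dangerous("curl -fsSL https://example.com/x.sh | sh"));
    assert!(!is_dangerous("ls -la"));
}
```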
Execution Modes
Manual (Default)
Commands are displayed for you to review and execute manually.
Auto-Execute
cliai auto-execute --enable
Safe commands execute automatically. Dangerous commands still require confirmation.
Dry-Run
cliai dry-run --enable
Shows what would be executed without actually running commands.
Architecture
CLIAI follows a modular architecture designed for reliability and extensibility.
Core Components
Orchestrator
Central coordinator managing AI providers and request routing.
Intent Classifier
Determines the type of request (command, question, explanation).
Context Gatherer
Collects system information for better responses.
Command Validator
Multi-layer validation with security checks.
Execution Engine
Safe command execution with multiple modes.
Performance Monitor
Tracks metrics and system health.
Circuit Breakers
Automatic failover between providers.
Request Flow
User Input
↓
Intent Classification
↓
Context Gathering
↓
AI Provider (Ollama/OpenAI/Anthropic)
↓
Command Validation
↓
Safety Checks
↓
Execution (Manual/Auto/Dry-run)
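The early stages of that flow can be sketched as a simple pipeline. Everything here (stage names, heuristics, types) is illustrative rather than CLIAI's actual internals:

```rust
#[derive(Debug, PartialEq)]
enum Intent {
    Command,
    Question,
}

fn classify(input: &str) -> Intent {
    // Toy heuristic for the sketch: questions end with '?'.
    if input.trim_end().ends_with('?') {
        Intent::Question
    } else {
        Intent::Command
    }
}

fn gather_context() -> String {
    // A real implementation would collect OS, shell, cwd, etc.
    "os=linux shell=bash".to_string()
}

fn handle(input: &str) -> String {
    let intent = classify(input);
    let context = gather_context();
    // The AI provider call, validation, and safety checks would follow here.
    format!("intent={:?} context=[{}] prompt={}", intent, context, input)
}

fn main() {
    println!("{}", handle("check disk usage"));
}
```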
Provider Management
CLIAI uses a sophisticated provider management system:
- Priority-based routing: Local first, cloud fallback
- Circuit breakers: Prevent repeated failures
- Retry logic: Automatic retries with backoff
- Performance monitoring: Track response times
- Health checks: Verify provider availability
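Retry with backoff, for example, might look like the following sketch (the function and its parameters are illustrative, not CLIAI's API):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Call `op` up to `max_attempts` times, doubling the delay after each
/// failure (exponential backoff). Returns the first Ok, or the last Err.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2;
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Fails twice, then succeeds on the third attempt.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("unavailable") } else { Ok("response") }
    });
    assert_eq!(result, Ok("response"));
    assert_eq!(calls, 3);
}
```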
API Documentation
For detailed Rust API documentation, build the rustdoc locally with `cargo doc --open`.
Key Modules
- `agents` - AI orchestration and provider management
- `config` - Configuration management
- `validation` - Command validation and safety
- `execution` - Command execution engine
- `providers` - AI provider implementations
- `performance` - Performance monitoring
Contributing to CLIAI
Thank you for your interest in contributing to CLIAI! We welcome contributions from the community and are excited to see what you'll bring to the project.
🚀 Getting Started
Prerequisites
- Rust 1.70 or later
- Git
- Ollama (for testing AI functionality)
Development Setup
1. Fork and Clone

git clone https://github.com/yourusername/cliai.git
cd cliai

2. Install Dependencies

cargo build

3. Set up Ollama (for testing)

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a test model
ollama pull mistral

4. Run Tests

# Unit tests
cargo test
# Integration tests with AI
cargo run -- test --quick
🎯 How to Contribute
Reporting Issues
- Use the GitHub Issues page
- Search existing issues before creating a new one
- Include detailed reproduction steps
- Provide system information (OS, Rust version, etc.)
Suggesting Features
- Open a GitHub Discussion first
- Describe the use case and expected behavior
- Consider implementation complexity and maintenance burden
Code Contributions
1. Create a Feature Branch

git checkout -b feature/your-feature-name

2. Make Your Changes

- Follow the existing code style
- Add tests for new functionality
- Update documentation as needed

3. Test Your Changes

# Run all tests
cargo test
# Test with real AI (requires Ollama)
cargo run -- test --categories "your-test-category"
# Check formatting
cargo fmt --check
# Run clippy
cargo clippy -- -D warnings

4. Commit Your Changes

git add .
git commit -m "feat: add amazing new feature"

5. Push and Create PR

git push origin feature/your-feature-name
📝 Code Style Guidelines
Rust Code Style
- Use `cargo fmt` for consistent formatting
- Follow Rust naming conventions
- Add documentation comments for public APIs
- Use `clippy` suggestions to improve code quality
Commit Messages
We follow the Conventional Commits specification:
- `feat:` - New features
- `fix:` - Bug fixes
- `docs:` - Documentation changes
- `style:` - Code style changes (formatting, etc.)
- `refactor:` - Code refactoring
- `test:` - Adding or updating tests
- `chore:` - Maintenance tasks
Examples:
feat: add support for custom AI providers
fix: resolve command validation edge case
docs: update installation instructions
test: add integration tests for file operations
🏗️ Project Structure
Understanding the codebase:
cliai/
├── src/
│   ├── main.rs            # CLI interface and entry point
│   ├── lib.rs             # Library exports
│   ├── agents/            # AI orchestration
│   │   ├── mod.rs         # Main orchestrator
│   │   └── profiles.rs    # AI model profiles
│   ├── config.rs          # Configuration management
│   ├── context.rs         # System context gathering
│   ├── execution.rs       # Command execution
│   ├── validation.rs      # Command validation
│   ├── providers.rs       # AI provider implementations
│   ├── history.rs         # Chat history
│   ├── performance.rs     # Performance monitoring
│   ├── error_handling.rs  # Error handling
│   ├── logging.rs         # Privacy-preserving logging
│   └── test_suite.rs      # Testing framework
├── Cargo.toml             # Dependencies and metadata
├── README.md              # Project documentation
└── CONTRIBUTING.md        # This file
โโโ CONTRIBUTING.md # This file
🧪 Testing Guidelines
Unit Tests
- Write tests for all public functions
- Use descriptive test names
- Test both success and error cases
- Mock external dependencies when possible
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_command_validation_success() {
        // Test implementation
    }

    #[test]
    fn test_command_validation_failure() {
        // Test implementation
    }
}
Integration Tests
- Test real AI interactions when possible
- Use the built-in test suite for comprehensive testing
- Add new test categories for new features
Performance Tests
- Monitor performance impact of changes
- Use the built-in performance monitoring
- Add benchmarks for critical paths
🔒 Security Considerations
Command Safety
- All command generation must go through validation
- New validation rules should be thoroughly tested
- Consider security implications of new features
Privacy Protection
- Never log user commands or prompts in production
- Ensure debug mode requires explicit consent
- Review data handling in new features
AI Provider Security
- Validate all AI responses
- Implement proper error handling for AI failures
- Consider rate limiting and abuse prevention
📚 Documentation
Code Documentation
- Document all public APIs with rustdoc comments
- Include examples in documentation
- Explain complex algorithms and design decisions
User Documentation
- Update README.md for user-facing changes
- Add examples for new features
- Update configuration documentation
🌟 Recognition
Contributors will be recognized in:
- GitHub contributors list
- Release notes for significant contributions
- README acknowledgments section
❓ Questions?
- Open a GitHub Discussion
- Check existing issues and discussions
- Reach out to maintainers
📋 Pull Request Checklist
Before submitting your PR, ensure:
- Code follows project style guidelines
- All tests pass (`cargo test`)
- New functionality includes tests
- Documentation is updated
- Commit messages follow conventional format
- No sensitive information is included
- Performance impact is considered
- Security implications are reviewed
Thank you for contributing to CLIAI! 🎉