Level 0: Foundations (~20 min)

Environment Setup

Get your machine ready for AI engineering. Python, API keys, GPU drivers, and essential packages.

Python Environment Management

AI projects have complex dependencies. You need isolated environments to avoid conflicts. Here are your options:

RECOMMENDED

uv - Fast Python Package Manager

10-100x faster than pip. Written in Rust. Handles virtual environments, package installation, and Python version management.

Install and use uv
# Install uv (macOS/Linux)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or with pip
pip install uv

# Create a new project with virtual environment
uv init my-ai-project
cd my-ai-project

# Or just create a venv in existing directory
uv venv
source .venv/bin/activate  # Linux/macOS
# .venv\Scripts\activate  # Windows

# Install packages (fast!)
uv pip install transformers torch openai anthropic

# Install from requirements.txt
uv pip install -r requirements.txt

# Sync dependencies from pyproject.toml
uv sync
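
If you created the project with uv init, you can also manage dependencies through pyproject.toml instead of pip-style installs. A quick sketch of that workflow (package names are only examples):

# Add dependencies to pyproject.toml and install them into the project venv
uv add openai anthropic

# Run a script inside the project environment without activating it manually
uv run python main.py

# Install and pin a specific Python version for the project
uv python install 3.12
uv python pin 3.12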

venv - Built-in Python

Standard library, no extra installation needed. Slower than uv but universally available.

python -m venv .venv
source .venv/bin/activate
pip install transformers openai

conda / mamba - Data Science Standard

Best for complex native dependencies (CUDA, scientific computing). Slower but handles compiled packages well.

conda create -n ai-env python=3.11
conda activate ai-env
conda install pytorch transformers -c conda-forge

When to Use What

Tool          Best For                                    Speed
uv            Most AI/ML projects, fast iteration         Very Fast
venv + pip    Simple projects, no extra tools             Moderate
conda         Complex CUDA deps, scientific computing     Slow

API Keys Setup

You will need API keys from providers to use their models. Keep them secure and never commit them to git.


OpenAI

GPT-4o, embeddings, Whisper

  1. Go to platform.openai.com/api-keys
  2. Click "Create new secret key"
  3. Copy the key (you will not see it again)
  4. Add billing at platform.openai.com/account/billing
# Add to .env file
OPENAI_API_KEY=sk-...

Anthropic

Claude 3.5 Sonnet, Opus, Haiku

  1. Go to console.anthropic.com/settings/keys
  2. Click "Create Key"
  3. Copy the key
  4. Add credits at console.anthropic.com/settings/billing
# Add to .env file
ANTHROPIC_API_KEY=sk-ant-...

HuggingFace

Models, datasets, Inference API

  1. Go to huggingface.co/settings/tokens
  2. Click "New token"
  3. Select "Read" or "Write" permissions
  4. Copy the token
# Add to .env file
HF_TOKEN=hf_...
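
With the token in .env, you can authenticate the HuggingFace libraries from Python. A minimal sketch, assuming python-dotenv and huggingface_hub (installed alongside transformers) are available:

import os
from dotenv import load_dotenv
from huggingface_hub import login

load_dotenv()                       # read HF_TOKEN from the .env file
login(token=os.getenv("HF_TOKEN"))  # authenticate model downloads and Inference API calls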

Security: Managing .env Files

# 1. Create .env file
touch .env

# 2. Add to .gitignore (IMPORTANT!)
echo ".env" >> .gitignore

# 3. Load in Python with python-dotenv
from dotenv import load_dotenv
import os

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")

Never commit API keys to version control. Use environment variables in production.
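
In production, the same code can read keys injected by the platform (CI secrets, container environment variables) rather than a .env file. A minimal fail-fast sketch:

import os

# Fail fast at startup if the key is missing, instead of erroring mid-request
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set")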

GPU Setup for Local Models

Running models locally requires GPU acceleration. Setup differs by hardware.


NVIDIA GPU (CUDA)

Best support for ML frameworks. Requires CUDA toolkit and cuDNN.

Check GPU and install PyTorch
# Check NVIDIA driver
nvidia-smi

# Install CUDA toolkit (Ubuntu)
# Visit: https://developer.nvidia.com/cuda-downloads

# Install PyTorch with CUDA support
uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Verify CUDA is available
python -c "import torch; print(torch.cuda.is_available())"
# Output: True

Apple Silicon (M1/M2/M3/M4)

Uses Metal Performance Shaders (MPS). Works out of the box with recent PyTorch.

Setup for Apple Silicon
# Install PyTorch (MPS support included)
uv pip install torch torchvision torchaudio

# Verify MPS is available
python -c "import torch; print(torch.backends.mps.is_available())"
# Output: True

# Use MPS in your code
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = model.to(device)
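
If your code needs to run on both NVIDIA and Apple hardware, a small helper can pick the best available backend. A sketch using standard PyTorch checks:

import torch

def pick_device() -> torch.device:
    # Prefer CUDA, then Apple's MPS, then fall back to CPU
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()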

CPU Only

Still possible for smaller models. Use quantized models and optimized runtimes.

# Use llama.cpp for efficient CPU inference
# Or use Ollama, which handles optimization
ollama run llama3.2:3b
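
Ollama also exposes a local HTTP API (on port 11434 by default), so you can call a CPU-served model from Python. A rough sketch using httpx; the model name is only an example and must already be pulled with ollama:

import httpx

# Ollama's local generate endpoint; assumes `ollama run llama3.2:3b` has pulled the model
response = httpx.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2:3b", "prompt": "Say hello", "stream": False},
    timeout=60.0,
)
print(response.json()["response"])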

Essential Packages

These packages cover most AI engineering needs. Install them in your virtual environment.

requirements.txt
# LLM APIs
openai>=1.0.0
anthropic>=0.18.0

# HuggingFace ecosystem
transformers>=4.36.0
sentence-transformers>=2.2.0
datasets>=2.15.0
accelerate>=0.25.0

# PyTorch (install separately with CUDA if needed)
torch>=2.1.0

# Environment management
python-dotenv>=1.0.0

# Tokenization
tiktoken>=0.5.0

# Data processing
numpy>=1.24.0
pandas>=2.0.0

# Web/API
requests>=2.31.0
httpx>=0.25.0

# Vector databases (pick one)
chromadb>=0.4.0
# pinecone-client>=2.2.0
# qdrant-client>=1.7.0

# Optional: Local inference
# ollama>=0.1.0
# vllm>=0.2.0

Quick Install

# With uv (recommended)
uv pip install openai anthropic transformers sentence-transformers tiktoken python-dotenv

# Or with pip
pip install openai anthropic transformers sentence-transformers tiktoken python-dotenv

Docker for Reproducibility

Docker ensures your environment works identically across machines. Essential for production.

Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install uv
RUN pip install uv

# Copy dependency files
COPY pyproject.toml .
COPY requirements.txt .

# Install dependencies
RUN uv pip install --system -r requirements.txt

# Copy application
COPY . .

# Run
CMD ["python", "main.py"]

GPU with Docker

# Use an NVIDIA base image for GPU support
FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04

# Run the container with GPU access
docker run --gpus all my-ai-image

Development Environments

Jupyter Notebooks

Best for:
  • Exploration and prototyping
  • Data visualization
  • Teaching and documentation
  • Quick experiments

uv pip install jupyter
jupyter notebook

Python Scripts

Best for:
  • Production code
  • Version control
  • Testing and CI/CD
  • Scheduled jobs

python main.py
pytest tests/

IDE Recommendations

VS Code

Free, great extensions, Jupyter support built-in.

PyCharm

Best Python IDE, strong refactoring, Pro has AI features.

Cursor

VS Code fork with AI-native features, code generation.

Verification Script

Run this script to verify your setup is working correctly.

verify_setup.py
#!/usr/bin/env python3
"""Verify AI development environment setup."""

import sys

def check_python():
    print(f"Python version: {sys.version}")
    assert sys.version_info >= (3, 10), "Python 3.10+ required"
    print("  Python: OK")

def check_packages():
    packages = [
        ("openai", "openai"),
        ("anthropic", "anthropic"),
        ("transformers", "transformers"),
        ("sentence_transformers", "sentence-transformers"),
        ("tiktoken", "tiktoken"),
        ("torch", "torch"),
        ("dotenv", "python-dotenv"),
    ]

    for import_name, pip_name in packages:
        try:
            __import__(import_name)
            print(f"  {pip_name}: OK")
        except ImportError:
            print(f"  {pip_name}: MISSING - run: uv pip install {pip_name}")

def check_gpu():
    try:
        import torch
        if torch.cuda.is_available():
            print(f"  CUDA: Available ({torch.cuda.get_device_name(0)})")
        elif hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
            print("  MPS (Apple Silicon): Available")
        else:
            print("  GPU: Not available (CPU only)")
    except ImportError:
        print("  GPU: PyTorch not installed")

def check_api_keys():
    import os
    from dotenv import load_dotenv
    load_dotenv()

    keys = [
        ("OPENAI_API_KEY", "OpenAI"),
        ("ANTHROPIC_API_KEY", "Anthropic"),
        ("HF_TOKEN", "HuggingFace"),
    ]

    for key, name in keys:
        if os.getenv(key):
            print(f"  {name}: Set")
        else:
            print(f"  {name}: Not set (optional)")

def test_openai():
    import os
    from dotenv import load_dotenv
    load_dotenv()

    if not os.getenv("OPENAI_API_KEY"):
        print("  OpenAI API: Skipped (no key)")
        return

    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Say 'test successful' in 2 words"}],
        max_tokens=10
    )
    print(f"  OpenAI API: OK - {response.choices[0].message.content}")

if __name__ == "__main__":
    print("\n=== Environment Verification ===\n")

    print("1. Python Version")
    check_python()

    print("\n2. Required Packages")
    check_packages()

    print("\n3. GPU Availability")
    check_gpu()

    print("\n4. API Keys (.env)")
    check_api_keys()

    print("\n5. API Connection Test")
    try:
        test_openai()
    except Exception as e:
        print(f"  OpenAI API: Error - {e}")

    print("\n=== Verification Complete ===\n")

Run the Verification

# Save as verify_setup.py and run
python verify_setup.py

Troubleshooting Common Issues

CUDA out of memory

Model too large for your GPU VRAM.

Fix: Use a smaller model, reduce batch size, or use quantization (4-bit/8-bit).
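
For HuggingFace models, 4-bit loading with bitsandbytes is often enough to fit. A rough sketch, assuming the bitsandbytes package is installed and using a placeholder model name:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load weights in 4-bit to cut VRAM use to roughly a quarter of fp16
quant_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",          # example model; substitute your own
    quantization_config=quant_config,
    device_map="auto",
)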

ModuleNotFoundError

Package not installed or wrong environment activated.

Fix: Ensure virtual environment is activated, then reinstall the package.
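
A quick way to confirm which interpreter and environment are actually active before reinstalling:

# The path should point into your project's .venv
which python
python -c "import sys; print(sys.prefix)"

# Then reinstall into that environment
uv pip install <package-name>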

API rate limit exceeded

Too many requests to the API in a short time.

Fix: Add delays between requests, implement exponential backoff, or upgrade your plan.
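
A minimal retry-with-backoff sketch (the wrapped call is a placeholder for your own API request):

import random
import time

def call_with_backoff(fn, max_retries: int = 5):
    """Retry fn() with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:  # in practice, catch the provider's RateLimitError
            if attempt == max_retries - 1:
                raise
            wait = (2 ** attempt) + random.random()
            print(f"Rate limited ({exc}); retrying in {wait:.1f}s")
            time.sleep(wait)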

torch.cuda.is_available() returns False

PyTorch not detecting your GPU.

Fix: Install PyTorch with CUDA support: pip install torch --index-url https://download.pytorch.org/whl/cu121
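
To check whether the installed PyTorch build includes CUDA support at all:

# Prints the PyTorch version and the CUDA version it was built against
# (None means a CPU-only build is installed)
python -c "import torch; print(torch.__version__, torch.version.cuda)"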

Key Takeaways

  1. Use uv for package management - 10-100x faster than pip, handles virtual environments and Python versions.

  2. Secure your API keys - Use .env files, add them to .gitignore, load with python-dotenv.

  3. GPU acceleration is optional - APIs work anywhere. Local models benefit from CUDA (NVIDIA) or MPS (Apple).

  4. Verify your setup - Run the verification script before starting any project.