---
name: python-patterns
description: Pythonic idioms, PEP 8 standards, type hints, and best practices for building robust, efficient, and maintainable Python applications.
---
# Python Development Patterns
Idiomatic Python patterns and best practices for building robust, efficient, and maintainable applications.
## When to Activate
- Writing new Python code
- Reviewing Python code
- Refactoring existing Python code
- Designing Python packages/modules
## Core Principles
### 1. Readability Counts
Python prioritizes readability. Code should be obvious and easy to understand.
```python
# Good: Clear and readable
def get_active_users(users: list[User]) -> list[User]:
    """Return only active users from the provided list."""
    return [user for user in users if user.is_active]

# Bad: Clever but confusing
def get_active_users(u):
    return [x for x in u if x.a]
```
### 2. Explicit is Better Than Implicit
Avoid magic; be clear about what your code does.
```python
# Good: Explicit configuration
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Bad: Hidden side effects
import some_module
some_module.setup()  # What does this do?
```
### 3. EAFP - Easier to Ask Forgiveness Than Permission
Python prefers exception handling over checking conditions.
```python
from typing import Any

# Good: EAFP style
def get_value(dictionary: dict, key: str, default: Any = None) -> Any:
    try:
        return dictionary[key]
    except KeyError:
        return default

# Bad: LBYL (Look Before You Leap) style
def get_value(dictionary: dict, key: str, default: Any = None) -> Any:
    if key in dictionary:
        return dictionary[key]
    else:
        return default
```
## Type Hints
### Basic Type Annotations
```python
from typing import Any, Dict, Optional

def process_user(
    user_id: str,
    data: Dict[str, Any],
    active: bool = True
) -> Optional[User]:
    """Process a user and return the updated User or None."""
    if not active:
        return None
    return User(user_id, data)
```
### Modern Type Hints (Python 3.9+)
```python
# Python 3.9+ - Use built-in types
def process_items(items: list[str]) -> dict[str, int]:
    return {item: len(item) for item in items}

# Python 3.8 and earlier - Use typing module
from typing import Dict, List

def process_items(items: List[str]) -> Dict[str, int]:
    return {item: len(item) for item in items}
```
### Type Aliases and TypeVar
```python
import json
from typing import Any, TypeVar, Union

# Type alias for complex types
JSON = Union[dict[str, Any], list[Any], str, int, float, bool, None]

def parse_json(data: str) -> JSON:
    return json.loads(data)

# Generic types
T = TypeVar('T')

def first(items: list[T]) -> T | None:
    """Return the first item or None if list is empty."""
    return items[0] if items else None
```
### Protocol-Based Duck Typing
```python
from typing import Protocol

class Renderable(Protocol):
    def render(self) -> str:
        """Render the object to a string."""
        ...

def render_all(items: list[Renderable]) -> str:
    """Render all items that implement the Renderable protocol."""
    return "\n".join(item.render() for item in items)
```
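Any class with a matching `render` method satisfies the protocol structurally, without inheriting from it. A minimal sketch (the `Button` class is illustrative, not part of the skill's API):
```python
class Button:
    """Satisfies Renderable by shape alone - no inheritance required."""
    def __init__(self, label: str):
        self.label = label

    def render(self) -> str:
        return f"<button>{self.label}</button>"

print(render_all([Button("OK"), Button("Cancel")]))
# <button>OK</button>
# <button>Cancel</button>
```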
## Error Handling Patterns
### Specific Exception Handling
```python
import json

# Good: Catch specific exceptions
def load_config(path: str) -> Config:
    try:
        with open(path) as f:
            return Config.from_json(f.read())
    except FileNotFoundError as e:
        raise ConfigError(f"Config file not found: {path}") from e
    except json.JSONDecodeError as e:
        raise ConfigError(f"Invalid JSON in config: {path}") from e

# Bad: Bare except
def load_config(path: str) -> Config:
    try:
        with open(path) as f:
            return Config.from_json(f.read())
    except:
        return None  # Silent failure!
```
### Exception Chaining
```python
def process_data(data: str) -> Result:
    try:
        parsed = json.loads(data)
    except json.JSONDecodeError as e:
        # Chain exceptions to preserve the traceback
        raise ValueError(f"Failed to parse data: {data}") from e
    return Result(parsed)
```
### Custom Exception Hierarchy
```python
class AppError(Exception):
    """Base exception for all application errors."""
    pass

class ValidationError(AppError):
    """Raised when input validation fails."""
    pass

class NotFoundError(AppError):
    """Raised when a requested resource is not found."""
    pass

# Usage
def get_user(user_id: str) -> User:
    user = db.find_user(user_id)
    if not user:
        raise NotFoundError(f"User not found: {user_id}")
    return user
```
## Context Managers
### Resource Management
```python
# Good: Using context managers
def process_file(path: str) -> str:
    with open(path, 'r') as f:
        return f.read()

# Bad: Manual resource management
def process_file(path: str) -> str:
    f = open(path, 'r')
    try:
        return f.read()
    finally:
        f.close()
```
### Custom Context Managers
```python
import time
from contextlib import contextmanager

@contextmanager
def timer(name: str):
    """Context manager to time a block of code."""
    start = time.perf_counter()
    try:
        yield
    finally:
        # Report timing even if the block raises
        elapsed = time.perf_counter() - start
        print(f"{name} took {elapsed:.4f} seconds")

# Usage
with timer("data processing"):
    process_large_dataset()
```
### Context Manager Classes
```python
class DatabaseTransaction:
    def __init__(self, connection):
        self.connection = connection

    def __enter__(self):
        self.connection.begin_transaction()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self.connection.commit()
        else:
            self.connection.rollback()
        return False  # Don't suppress exceptions

# Usage
with DatabaseTransaction(conn):
    user = conn.create_user(user_data)
    conn.create_profile(user.id, profile_data)
```
## Comprehensions and Generators
### List Comprehensions
```python
# Good: List comprehension for simple transformations
names = [user.name for user in users if user.is_active]

# Bad: Manual loop
names = []
for user in users:
    if user.is_active:
        names.append(user.name)

# Complex comprehensions should be broken out into a function
# Bad: Too complex
result = [x * 2 for x in items if x > 0 if x % 2 == 0]

# Good: Use a regular function
from collections.abc import Iterable

def filter_and_transform(items: Iterable[int]) -> list[int]:
    result = []
    for x in items:
        if x > 0 and x % 2 == 0:
            result.append(x * 2)
    return result
```
### Generator Expressions
```python
# Good: Generator for lazy evaluation
total = sum(x * x for x in range(1_000_000))
# Bad: Creates large intermediate list
total = sum([x * x for x in range(1_000_000)])
```
### Generator Functions
```python
from collections.abc import Iterator

def read_large_file(path: str) -> Iterator[str]:
    """Read a large file line by line."""
    with open(path) as f:
        for line in f:
            yield line.strip()

# Usage
for line in read_large_file("huge.txt"):
    process(line)
```
## Data Classes and Named Tuples
### Data Classes
```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class User:
    """User entity with automatic __init__, __repr__, and __eq__."""
    id: str
    name: str
    email: str
    created_at: datetime = field(default_factory=datetime.now)
    is_active: bool = True

# Usage
user = User(
    id="123",
    name="Alice",
    email="alice@example.com"
)
```
### Data Classes with Validation
```python
@dataclass
class User:
    email: str
    age: int

    def __post_init__(self):
        # Validate email format
        if "@" not in self.email:
            raise ValueError(f"Invalid email: {self.email}")
        # Validate age range
        if self.age < 0 or self.age > 150:
            raise ValueError(f"Invalid age: {self.age}")
```
### Named Tuples
```python
from typing import NamedTuple

class Point(NamedTuple):
    """Immutable 2D point."""
    x: float
    y: float

    def distance(self, other: 'Point') -> float:
        return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) ** 0.5

# Usage
p1 = Point(0, 0)
p2 = Point(3, 4)
print(p1.distance(p2))  # 5.0
```
## Decorators
### Function Decorators
```python
import functools
import time
from collections.abc import Callable

def timer(func: Callable) -> Callable:
    """Decorator to time function execution."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(1)

# slow_function() prints something like: slow_function took 1.0012s
```
### Parameterized Decorators
```python
import functools
from collections.abc import Callable

def repeat(times: int):
    """Decorator to repeat a function multiple times."""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            results = []
            for _ in range(times):
                results.append(func(*args, **kwargs))
            return results
        return wrapper
    return decorator

@repeat(times=3)
def greet(name: str) -> str:
    return f"Hello, {name}!"

# greet("Alice") returns ["Hello, Alice!", "Hello, Alice!", "Hello, Alice!"]
```
### Class-Based Decorators
```python
import functools
from collections.abc import Callable

class CountCalls:
    """Decorator that counts how many times a function is called."""
    def __init__(self, func: Callable):
        functools.update_wrapper(self, func)
        self.func = func
        self.count = 0

    def __call__(self, *args, **kwargs):
        self.count += 1
        print(f"{self.func.__name__} has been called {self.count} times")
        return self.func(*args, **kwargs)

@CountCalls
def process():
    pass

# Each call to process() prints the call count
```
## Concurrency Patterns
### Threading for I/O-Bound Tasks
```python
import concurrent.futures
import urllib.request

def fetch_url(url: str) -> str:
    """Fetch a URL (I/O-bound operation)."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode()

def fetch_all_urls(urls: list[str]) -> dict[str, str]:
    """Fetch multiple URLs concurrently using threads."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        future_to_url = {executor.submit(fetch_url, url): url for url in urls}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                results[url] = future.result()
            except Exception as e:
                results[url] = f"Error: {e}"
    return results
```
### Multiprocessing for CPU-Bound Tasks
```python
import concurrent.futures

def process_data(data: list[int]) -> int:
    """CPU-intensive computation."""
    return sum(x ** 2 for x in data)

def process_all(datasets: list[list[int]]) -> list[int]:
    """Process multiple datasets using multiple processes."""
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(process_data, datasets))
    return results
```
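On start methods that spawn new interpreters (Windows, and macOS by default), worker processes re-import the module, so the entry point must sit behind a `__main__` guard. A minimal usage sketch with made-up datasets:
```python
if __name__ == "__main__":
    # Guard is required for ProcessPoolExecutor on spawn-based platforms
    datasets = [list(range(1_000)), list(range(10_000)), list(range(100_000))]
    print(process_all(datasets))
```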
### Async/Await for Concurrent I/O
```python
import asyncio
import aiohttp  # third-party: pip install aiohttp

async def fetch_async(url: str) -> str:
    """Fetch a URL asynchronously."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def fetch_all(urls: list[str]) -> dict[str, str]:
    """Fetch multiple URLs concurrently."""
    tasks = [fetch_async(url) for url in urls]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return dict(zip(urls, results))
```
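The top-level coroutine is typically started with `asyncio.run`; a small usage sketch (the URLs are placeholders, and values may be exceptions because of `return_exceptions=True`):
```python
if __name__ == "__main__":
    urls = ["https://example.com", "https://example.org"]
    results = asyncio.run(fetch_all(urls))
    for url, body in results.items():
        # body is either the response text or the exception that was raised
        print(url, len(body) if isinstance(body, str) else body)
```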
## Package Organization
### Standard Project Layout
```
myproject/
├── src/
│   └── mypackage/
│       ├── __init__.py
│       ├── main.py
│       ├── api/
│       │   ├── __init__.py
│       │   └── routes.py
│       ├── models/
│       │   ├── __init__.py
│       │   └── user.py
│       └── utils/
│           ├── __init__.py
│           └── helpers.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_api.py
│   └── test_models.py
├── pyproject.toml
├── README.md
└── .gitignore
```
### Import Conventions
```python
# Good: Import order - stdlib, third-party, local
import os
import sys
from pathlib import Path

import requests
from fastapi import FastAPI

from mypackage.models import User
from mypackage.utils import format_name

# Good: Use isort for automatic import sorting
# pip install isort
```
### __init__.py for Package Exports
```python
# mypackage/__init__.py
"""mypackage - A sample Python package."""
__version__ = "1.0.0"
# Export main classes/functions at package level
from mypackage.models import User, Post
from mypackage.utils import format_name
__all__ = ["User", "Post", "format_name"]
```
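With these re-exports, callers import from the package root instead of reaching into submodules; a short usage sketch (assuming the `User` dataclass and `format_name` helper shown earlier):
```python
# Consumer code imports from the package root
from mypackage import User, format_name

user = User(id="123", name="Alice", email="alice@example.com")
print(format_name(user.name))
```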
## Memory and Performance
### Using __slots__ for Memory Efficiency
```python
# Bad: Regular class uses __dict__ (more memory)
class Point:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

# Good: __slots__ reduces memory usage
class Point:
    __slots__ = ['x', 'y']

    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y
```
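One way to see the difference is that a slotted instance carries no per-instance `__dict__`; a rough check (class names here are illustrative, and exact byte counts vary by Python version and platform):
```python
import sys

class PlainPoint:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

class SlottedPoint:
    __slots__ = ('x', 'y')

    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

plain = PlainPoint(1.0, 2.0)
slotted = SlottedPoint(1.0, 2.0)

# Regular instances pay for a separate attribute dict; slotted ones do not
print(sys.getsizeof(plain) + sys.getsizeof(plain.__dict__))
print(sys.getsizeof(slotted))
print(hasattr(slotted, "__dict__"))  # False
```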
### Generator for Large Data
```python
# Bad: Returns full list in memory
def read_lines(path: str) -> list[str]:
with open(path) as f:
return [line.strip() for line in f]
# Good: Yields lines one at a time
def read_lines(path: str) -> Iterator[str]:
with open(path) as f:
for line in f:
yield line.strip()
```
### Avoid String Concatenation in Loops
```python
# Bad: O(n²) due to string immutability
result = ""
for item in items:
    result += str(item)

# Good: O(n) using join
result = "".join(str(item) for item in items)

# Good: Using StringIO for building
from io import StringIO

buffer = StringIO()
for item in items:
    buffer.write(str(item))
result = buffer.getvalue()
```
## Python Tooling Integration
### Essential Commands
```bash
# Code formatting
black .
isort .
# Linting
ruff check .
pylint mypackage/
# Type checking
mypy .
# Testing
pytest --cov=mypackage --cov-report=html
# Security scanning
bandit -r .
# Dependency management
pip-audit
safety check
```
### pyproject.toml Configuration
```toml
[project]
name = "mypackage"
version = "1.0.0"
requires-python = ">=3.9"
dependencies = [
    "requests>=2.31.0",
    "pydantic>=2.0.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.4.0",
    "pytest-cov>=4.1.0",
    "black>=23.0.0",
    "ruff>=0.1.0",
    "mypy>=1.5.0",
]

[tool.black]
line-length = 88
target-version = ['py39']

[tool.ruff]
line-length = 88
select = ["E", "F", "I", "N", "W"]

[tool.mypy]
python_version = "3.9"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "--cov=mypackage --cov-report=term-missing"
```
## Quick Reference: Python Idioms
| Idiom | Description |
|-------|-------------|
| EAFP | Easier to Ask Forgiveness than Permission |
| Context managers | Use `with` for resource management |
| List comprehensions | For simple transformations |
| Generators | For lazy evaluation and large datasets |
| Type hints | Annotate function signatures |
| Dataclasses | For data containers with auto-generated methods |
| `__slots__` | For memory optimization |
| f-strings | For string formatting (Python 3.6+) |
| `pathlib.Path` | For path operations (Python 3.4+) |
| `enumerate` | For index-element pairs in loops |
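A short sketch of the f-string, `pathlib.Path`, and `enumerate` idioms from the table:
```python
from pathlib import Path

# f-strings: readable interpolation with inline formatting
name, score = "Alice", 0.91234
print(f"{name}: {score:.2%}")  # Alice: 91.23%

# pathlib.Path: object-oriented path handling
config_path = Path("config") / "settings.toml"
if config_path.exists():
    print(config_path.read_text())

# enumerate: index-element pairs without a manual counter
for index, line in enumerate(["first", "second", "third"], start=1):
    print(index, line)
```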
## Anti-Patterns to Avoid
```python
# Bad: Mutable default arguments
def append_to(item, items=[]):
    items.append(item)
    return items

# Good: Use None and create a new list
def append_to(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

# Bad: Checking type with type()
if type(obj) == list:
    process(obj)

# Good: Use isinstance
if isinstance(obj, list):
    process(obj)

# Bad: Comparing to None with ==
if value == None:
    process()

# Good: Use is
if value is None:
    process()

# Bad: from module import *
from os.path import *

# Good: Explicit imports
from os.path import join, exists

# Bad: Bare except
try:
    risky_operation()
except:
    pass

# Good: Specific exception
try:
    risky_operation()
except SpecificError as e:
    logger.error(f"Operation failed: {e}")
```
__Remember__: Python code should be readable, explicit, and follow the principle of least surprise. When in doubt, prioritize clarity over cleverness.