everything-claude-code/.opencode/commands/e2e.md
Affaan Mustafa, 6d440c036d (2026-02-05): feat: complete OpenCode plugin support with hooks, tools, and commands
Major OpenCode integration overhaul:

- llms.txt: Comprehensive OpenCode documentation for LLMs (642 lines)
- .opencode/plugins/ecc-hooks.ts: All Claude Code hooks translated to OpenCode's plugin system
- .opencode/tools/*.ts: 3 custom tools (run-tests, check-coverage, security-audit)
- .opencode/commands/*.md: All 24 commands in OpenCode format
- .opencode/package.json: npm package structure for opencode-ecc
- .opencode/index.ts: Main plugin entry point

- Delete incorrect LIMITATIONS.md (hooks ARE supported via plugins)
- Rewrite MIGRATION.md with correct hook event mapping
- Update README.md OpenCode section to show full feature parity

OpenCode has 20+ events vs Claude Code's 3 phases:
- PreToolUse → tool.execute.before
- PostToolUse → tool.execute.after
- Stop → session.idle
- SessionStart → session.created
- SessionEnd → session.deleted
- Plus: file.edited, file.watcher.updated, permission.asked, todo.updated

- 12 agents: Full parity
- 24 commands: Full parity (+1 from original 23)
- 16 skills: Full parity
- Hooks: OpenCode has MORE (20+ events vs 3 phases)
- Custom Tools: 3 native OpenCode tools

The OpenCode configuration can now be:
1. Used directly: cd everything-claude-code && opencode
2. Installed via npm: npm install opencode-ecc

description: Generate and run E2E tests with Playwright
agent: e2e-runner
subtask: true

E2E Command

Generate and run end-to-end tests using Playwright: $ARGUMENTS

Your Task

  1. Analyze user flow to test
  2. Create test journey with Playwright
  3. Run tests and capture artifacts
  4. Report results with screenshots/videos

Test Structure

import { test, expect } from '@playwright/test'

test.describe('Feature: [Name]', () => {
  test.beforeEach(async ({ page }) => {
    // Setup: Navigate, authenticate, prepare state
  })

  test('should [expected behavior]', async ({ page }) => {
    // Arrange: Set up test data

    // Act: Perform user actions
    await page.click('[data-testid="button"]')
    await page.fill('[data-testid="input"]', 'value')

    // Assert: Verify results
    await expect(page.locator('[data-testid="result"]')).toBeVisible()
  })

  test.afterEach(async ({ page }, testInfo) => {
    // Capture screenshot on failure
    if (testInfo.status !== 'passed') {
      await page.screenshot({ path: `test-results/${testInfo.title}.png` })
    }
  })
})

Best Practices

Selectors

  • Prefer data-testid attributes
  • Avoid CSS classes (they change)
  • Use semantic selectors (roles, labels)
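
For example (the test id, role name, and label below are placeholders):

// Brittle: coupled to styling, breaks when classes change
await page.locator('.btn.btn-primary').click()

// Stable: test id and semantic locators
await page.getByTestId('submit-button').click()
await page.getByRole('button', { name: 'Submit' }).click()
await page.getByLabel('Email').fill('user@example.com')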

Waits

  • Use Playwright's auto-waiting
  • Avoid page.waitForTimeout()
  • Use expect().toBeVisible() for assertions
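
For example, replace a fixed sleep with auto-waiting actions and a web-first assertion (test ids are placeholders):

// Flaky: a fixed sleep either wastes time or races the UI
await page.waitForTimeout(5000)

// Better: actions auto-wait until the element is actionable
await page.getByTestId('save').click()

// Better: the assertion polls until the element is visible or the timeout expires
await expect(page.getByTestId('toast-success')).toBeVisible()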

Test Isolation

  • Each test should be independent
  • Clean up test data after each test
  • Don't rely on test order
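
One way to keep tests independent is to seed and reset state around each test through the app's API. A minimal sketch, assuming hypothetical /api/test/seed and /api/test/reset endpoints:

test.beforeEach(async ({ request }) => {
  // Create fresh data for this test only (hypothetical endpoint)
  await request.post('/api/test/seed', { data: { fixture: 'checkout-user' } })
})

test.afterEach(async ({ request }) => {
  // Reset state so later tests never depend on this one (hypothetical endpoint)
  await request.post('/api/test/reset')
})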

Artifacts to Capture

  • Screenshots on failure
  • Videos for debugging
  • Trace files for detailed analysis
  • Network logs if relevant
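
Screenshots, videos, and traces can be enabled once in the Playwright config instead of per test. A sketch of the relevant use options:

// playwright.config.ts
import { defineConfig } from '@playwright/test'

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // capture screenshots for failed tests
    video: 'retain-on-failure',    // keep videos only when a test fails
    trace: 'on-first-retry',       // record a trace when a test is retried
  },
})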

Test Categories

  1. Critical User Flows

    • Authentication (login, logout, signup)
    • Core feature happy paths
    • Payment/checkout flows
  2. Edge Cases

    • Network failures
    • Invalid inputs
    • Session expiry
  3. Cross-Browser

    • Chrome, Firefox, Safari
    • Mobile viewports
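
Cross-browser and mobile runs are typically declared as projects in the Playwright config, roughly like this:

// playwright.config.ts
import { defineConfig, devices } from '@playwright/test'

export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-chrome', use: { ...devices['Pixel 5'] } },
  ],
})

A single project can then be run with: npx playwright test --project=firefox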

Report Format

E2E Test Results
================
✅ Passed: X
❌ Failed: Y
⏭️ Skipped: Z

Failed Tests:
- test-name: Error message
  Screenshot: path/to/screenshot.png
  Video: path/to/video.webm

TIP: Run with --headed flag for debugging: npx playwright test --headed