Compare commits

...

107 Commits

Author SHA1 Message Date
yyhuni
7fd832ce22 chore(ci): update CI configuration and Makefile for consistency checks
- Fixed the runner version in the CI workflow to ubuntu-22.04 to prevent changes in CI behavior due to runner updates.
- Updated Go version in the CI setup to 1.24 to align with go.mod.
- Added a new test step in the Makefile to check version consistency across all workflows.
2026-02-01 19:26:49 +08:00
yyhuni
e76ecaac15 refactor(flow): improve node connection styling and edge handling logic
- Extract shared className and style functions; unify position offsets for source-side Handles
- Replace the arrow marker with a closed arrow and adjust its size and styling for better visuals
- Refactor the edge generation function to add two edge objects for bidirectional connections
- Disable node dragging to prevent accidental node movement
- Remove the redundant markerStart property to simplify edge configuration
- Unify edge animation, styling, and label rendering logic to improve code reuse and maintainability
2026-02-01 16:44:05 +08:00
yyhuni
08e6c7fbe3 refactor(agent): replace print statements with a unified logging system
- Add a logger module providing zap-based log management (a minimal setup is sketched below)
- Switch the agent main program and internal modules to zap for logging info and errors
- Add detailed log output for key internal agent events
- Control log format and output via configured log levels and environment variables
- Enable TLS verification skipping in the websocket and task clients and log connections
- Add structured logging to task receipt, cancellation, and config update flows
- Add panic-capture logging and status updates during the update process
- Remove the .vscode/settings.json config file
- Update the Dockerfile base image version and environment variables
- Add SSL certificate ignore rules to .gitignore
- Adjust Go module dependencies, adding several logging and related libraries
2026-02-01 12:52:14 +08:00
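The logger module itself isn't shown in this compare view; a minimal sketch of a zap-based setup with environment-controlled level and format might look like the following (the LOG_LEVEL and LOG_FORMAT variable names are assumptions, not taken from the commit):

```go
package logger

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// New builds a zap logger whose level and output format are driven by
// environment variables, as the commit describes. LOG_LEVEL and
// LOG_FORMAT are illustrative names, not confirmed by the commit.
func New() (*zap.Logger, error) {
	cfg := zap.NewProductionConfig() // JSON output by default
	if os.Getenv("LOG_FORMAT") == "console" {
		cfg = zap.NewDevelopmentConfig() // human-readable console output
	}
	if lvl := os.Getenv("LOG_LEVEL"); lvl != "" {
		parsed, err := zapcore.ParseLevel(lvl) // "debug", "info", "warn", "error"
		if err != nil {
			return nil, err
		}
		cfg.Level = zap.NewAtomicLevelAt(parsed)
	}
	return cfg.Build()
}
```

Call sites then use structured fields, e.g. `log.Info("task received", zap.String("taskId", id))`, in place of print statements.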
yyhuni
5adb239547 feat(core): improve theming, route prefetching, and boot-loading experience
- Set data-theme and the dark-mode class in the frontend main layout based on the theme cookie
- Remove unneeded inline boot-loading CSS; inject critical styles via a theme initialization component
- Add the same inline boot-loading styles to the login layout so they display before the page loads
- Hide the boot loader on the login page once page load and route prefetching complete
- Dispatch a custom event on sidebar navigation clicks to trigger the route progress bar
- Support manual start and timeout-based auto-completion in the route progress bar, rounding out loading-state management
- Add cookie support to the useColorTheme hook and unify theme caching and update logic
- Strengthen multi-language support in the useRoutePrefetch hook, automatically prefetching routes with the locale prefix
- Show the loading state in the verification component based on auth and loading status, avoiding an unauthenticated flash
- Add a fade-out animation and a controllable active state to the LoadingState component for a smoother loading experience
2026-01-30 16:10:57 +08:00
yyhuni
896ae7743d chore: rename backend references to LunaFox 2026-01-30 13:01:02 +08:00
yyhuni
d5c363294b chore(frontend): rebrand to LunaFox and cleanup 2026-01-30 12:52:22 +08:00
yyhuni
4734f7a576 feat(ui): improve the login page boot animation and splash screen implementation
- Remove the LoginBootScreen component; render the splash screen inline in the global layout instead
- Remove the PixelBlast animation from the login page, replacing it with a circuit-board-style background animation and simpler loading logic
- Hide and remove the inline splash element via DOM operations on the login page for a smooth transition
- Add base loading animation and shield loader styles to global.css
- Replace the logo icon in the AppSidebar and AboutDialog components with an optimized PNG logo asset
- Add automatic splash-hiding logic to auth-layout so the splash screen doesn't interfere with app rendering
- Remove the random time seed from the PixelBlast component in favor of a fixed value for deterministic animation
- Add forwardRef support to the Shuffle component and expose a play interface for triggering the animation
- Delete the deprecated icon.svg file in favor of PNG icons for better compatibility and performance
- Refine the boot animation copy and pacing to improve perceived user experience
- Polish details including CSS keyframes, colors, and layout tweaks for more consistent visuals
2026-01-29 18:08:04 +08:00
yyhuni
46b1d5a1d1 refactor(workers): remove the agent key regeneration feature and related code
- Remove the key regeneration UI and callbacks from agent-related components
- Delete key-regeneration state and handler functions from the AgentList component
- Remove the useRegenerateAgentKey hook and its call sites
- Change the default WebSocket address port from 8888 to 8080
- Change the default backend address port in environment variables to 8080
- Simplify frontend component imports, dropping unused icons and component dependencies
2026-01-28 15:47:46 +08:00
yyhuni
66fa60c415 feat(agent): implement the task execution and management module
- Add an Executor struct that runs tasks in containers and manages their lifecycle
- Implement task start, monitoring, cancellation, and timeout handling
- Add a task cancellation marker to avoid re-running already-cancelled tasks
- Add load-aware task pulling with an exponential backoff strategy to the Puller (sketched below)
- Implement the auto-update flow in the Updater, including image pulls and new container startup
- Support auto-reconnect, heartbeat detection, and message handling in the WebSocket client
- Add a message handler with callbacks for task-available, task-cancel, config-update, and update requests
- Adjust frontend AgentCardCompact component styles and spacing
- Delete the unused frontend/logo-gallery.html file
2026-01-27 21:02:09 +08:00
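The Puller's "exponential backoff" bullet above maps to a familiar loop shape. This is a self-contained sketch under assumed parameters (1-second initial delay, 2-minute cap, jitter), not the agent's actual code:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// pullWithBackoff polls for a task, doubling the wait after each empty
// poll up to a cap, with jitter so idle agents don't poll in lockstep.
func pullWithBackoff(pull func() (string, bool)) string {
	delay := time.Second
	const maxDelay = 2 * time.Minute
	for {
		if task, ok := pull(); ok {
			return task
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	n := 0
	task := pullWithBackoff(func() (string, bool) {
		n++
		return "task-42", n >= 3 // pretend the queue is empty twice
	})
	fmt.Println("pulled", task)
}
```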
yyhuni
3d54d26c7e feat(agent): implement basic agent functionality and config loading
- Add the agent main program entry point with graceful shutdown on interrupt signals
- Implement the agent run loop, including the WebSocket client, task pulling, the executor, and heartbeat sending
- Add a config module supporting environment variables plus command-line flag parsing and validation
- Implement live config updates with dynamic adjustment of task concurrency and resource thresholds
- Wrap the Docker client with support for container create, start, stop, and log retrieval
- Implement the task-pull client and status reporting, including a retry mechanism
- Add a health management module tracking agent health state and state-change times
- Handle WebSocket messages for task notification, task cancellation, and config updates
- Add metrics collection for CPU, memory, and disk usage (see the sketch below)
- Add unit tests for each module covering core logic and error handling
2026-01-27 16:47:58 +08:00
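For the CPU/memory/disk metrics bullet, a common Go choice is gopsutil; whether the agent actually uses that library is an assumption:

```go
package main

import (
	"fmt"
	"time"

	"github.com/shirou/gopsutil/v3/cpu"
	"github.com/shirou/gopsutil/v3/disk"
	"github.com/shirou/gopsutil/v3/mem"
)

func main() {
	cpuPct, err := cpu.Percent(time.Second, false) // aggregate usage over 1s
	if err != nil {
		panic(err)
	}
	vm, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	du, err := disk.Usage("/")
	if err != nil {
		panic(err)
	}
	fmt.Printf("cpu=%.1f%% mem=%.1f%% disk=%.1f%%\n",
		cpuPct[0], vm.UsedPercent, du.UsedPercent)
}
```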
yyhuni
b4a289b198 feat(router): add route registration for multiple modules
- Add auth routes for login and token refresh
- Add directory routes supporting bulk operations and export
- Add endpoint routes supporting list, export, bulk operations, and more
- Add engine routes with CRUD support
- Add health-check routes with basic and status checks
- Add host-port routes supporting bulk reporting and deletion
- Add organization routes for organization management and linked-target operations
- Add public routes for screenshot image access
- Add scan-log routes supporting list and bulk create
- Add scan routes for scan management and bulk deletion
- Add screenshot routes supporting list and bulk operations
- Add snapshot routes supporting bulk operations and export across resource types
- Add subdomain routes supporting export and bulk operations
- Add target routes for bulk create, delete, and management
- Add user routes for create, list, and password change
- Add vulnerability routes for statistics, review marking, and bulk operations
- Add website routes for bulk import, delete, and other management operations
- Add wordlist routes for wordlist content management and download
- Add worker routes, protected by worker auth middleware, supporting bulk reporting and wordlist download
- Add PublicURL support to config, defaulting to an empty string
- Update go.mod and go.sum with several new dependencies
- Add an .opencode ignore rule to .gitignore
- Add a VERSION file set to v1.5.12-dev
2026-01-24 16:59:28 +08:00
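The diffs for these route files aren't shown in this view. Assuming the Gin framework (hinted at elsewhere by BindJSON and handler/middleware wiring), per-module registration typically looks like the following sketch with hypothetical handler names:

```go
package main

import "github.com/gin-gonic/gin"

// RegisterHealthRoutes and RegisterUserRoutes are hypothetical names
// mirroring the per-module registration pattern the commit describes.
func RegisterHealthRoutes(rg *gin.RouterGroup) {
	rg.GET("/health", func(c *gin.Context) {
		c.JSON(200, gin.H{"status": "ok"})
	})
}

func RegisterUserRoutes(rg *gin.RouterGroup, auth gin.HandlerFunc) {
	users := rg.Group("/users", auth) // protected by auth middleware
	users.POST("/", func(c *gin.Context) { /* create user */ })
	users.GET("/", func(c *gin.Context) { /* list users */ })
}

func main() {
	r := gin.Default()
	api := r.Group("/api")
	RegisterHealthRoutes(api)
	RegisterUserRoutes(api, func(c *gin.Context) { c.Next() }) // stand-in auth
	r.Run(":8080")
}
```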
yyhuni
b727b2d001 test: add and update tests for agent, server and worker components 2026-01-23 22:33:58 +08:00
yyhuni
b859fc9062 refactor(modules): update module paths to the new GitHub user namespace
- Change all import paths from github.com/orbit/server to github.com/yyhuni/orbit/server
- Update the go.mod module name to github.com/yyhuni/orbit/server
- Adjust internal reference paths to keep package imports consistent
- Add AGENTS.md and WARP.md ignore rules to .gitignore
- Update the binding rule for the engineNames field in Scan requests to require exactly one element
2026-01-23 18:31:54 +08:00
yyhuni
49b5fbef28 chore(docs): delete redundant project documentation files
- Remove AGENTS.md to simplify the documentation structure
- Remove WARP.md, deleting a duplicated operations guide
- Clean up the docs directory to reduce maintenance burden
- Tidy the project root for better overall organization
2026-01-23 09:39:46 +08:00
yyhuni
11112a68f6 Remove .hypothesis, .DS_Store and log files from version control 2026-01-23 09:31:47 +08:00
yyhuni
9049b096ba Remove .venv and .kiro directories from version control 2026-01-23 09:29:51 +08:00
yyhuni
ca6c0eb082 Remove .kiro directory from version control 2026-01-23 09:28:09 +08:00
yyhuni
64bcd9a6f5 Ignore files 2026-01-23 09:20:32 +08:00
yyhuni
443e2172e4 Ignore AI files 2026-01-23 09:17:26 +08:00
yyhuni
c6dcfb0a5b Remove specs directory from version control 2026-01-23 09:14:46 +08:00
yyhuni
25ae325c69 Remove AI assistant directories from version control
2026-01-23 09:12:21 +08:00
yyhuni
cab83d89cf chore(.agent,.gemini,.github): remove duplicate vercel-react-best-practices skills
- Remove vercel-react-best-practices skill directory from .agent/skills
- Remove vercel-react-best-practices skill directory from .gemini/skills
- Remove vercel-react-best-practices skill directory from .github/skills
- Eliminate redundant skill definitions across multiple agent configurations
- Consolidate skill management to reduce maintenance overhead
2026-01-22 22:46:59 +08:00
yyhuni
0f8fff2dc4 chore(.claude): reorganize Claude commands and skills structure
- Add speckit command suite (.claude/commands/) for workflow automation
- Reorganize Vercel React best practices skills with improved structure
- Add Hypothesis testing constants database
- Remove .dockerignore and .gitignore from repository
- Add .DS_Store to tracked files
- Consolidate development tooling and AI assistant configuration for improved project workflow
2026-01-22 22:46:31 +08:00
yyhuni
6e48b97dc2 chore(.specify): add project constitution and development workflow scripts
- Add constitution.md template for documenting core principles and governance
- Add check-prerequisites.sh script for unified prerequisite validation
- Add common.sh utility functions for bash scripts
- Add create-new-feature.sh script for feature scaffolding
- Add setup-plan.sh script for implementation planning
- Add update-agent-context.sh script for agent context management
- Add agent-file-template.md for standardized agent documentation
- Add checklist-template.md for task tracking
- Add plan-template.md for implementation planning
- Add spec-template.md for feature specifications
- Add tasks-template.md for task breakdown
- Update scan history components with improved data handling and UI consistency
- Update scan types and mock data for enhanced scan tracking
- Update i18n messages for scan history localization
- Establish standardized development workflow and documentation structure
2026-01-22 08:56:22 +08:00
yyhuni
ed757d6e14 feat(engineschema): add JSON schema validation and migrate subdomain discovery schema
- Add new engineschema package with schema validation utilities for engine configs
- Implement Validate() function to validate config maps against JSON schemas
- Implement ValidateYAML() function to validate YAML blobs with nested engine support
- Add schema caching with mutex synchronization for performance
- Migrate subdomain_discovery.schema.json from server/configs/engines to server/internal/engineschema
- Enhance schema with $id, x-engine, and x-engine-version metadata fields
- Add conditional validation (if/then) for bruteforce tool enabled state
- Add additionalProperties: false constraints to enforce strict schema validation
- Add jsonschema/v5 dependency to server and worker modules
- Update schema-gen tool to generate schemas in new location
- Regenerate subdomain discovery schema with enhanced validation rules
- Update documentation generation timestamp
2026-01-21 22:00:23 +08:00
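The commit names a jsonschema/v5 dependency; santhosh-tekuri/jsonschema/v5 is the usual Go module with that version path, though that identification is an assumption. A minimal Validate()-style flow against a config map:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/santhosh-tekuri/jsonschema/v5"
)

func main() {
	schema := `{
	  "type": "object",
	  "properties": {"enabled": {"type": "boolean"}},
	  "required": ["enabled"],
	  "additionalProperties": false
	}`
	sch, err := jsonschema.CompileString("subdomain_discovery.schema.json", schema)
	if err != nil {
		panic(err)
	}
	var cfg interface{}
	if err := json.Unmarshal([]byte(`{"enabled": true}`), &cfg); err != nil {
		panic(err)
	}
	if err := sch.Validate(cfg); err != nil { // fails on unknown or missing keys
		fmt.Println("invalid config:", err)
		return
	}
	fmt.Println("config is valid")
}
```

With `additionalProperties: false` in place, as the commit adds, unrecognized keys in an engine config fail validation instead of being silently ignored.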
yyhuni
2aa1afbabf chore(docker): add server Dockerfile and update subdomain discovery paths
- Add new server/Dockerfile for Go backend containerization with multi-stage build
- Update docker-compose.dev.yml to include server service with database and Redis dependencies
- Migrate Sublist3r tool path from /usr/local/share to /opt/orbit-tools/share for consistency
- Add legacy notice to docker/worker/Dockerfile clarifying it's for old Python executor
- Update subdomain discovery documentation with RFC3339 timestamp format
- Update template parsing test to reflect new tool path location
- Consolidate development environment configuration with all required services
2026-01-21 10:38:57 +08:00
yyhuni
35ac64db57 Merge branch 'feature/directory-sorting-demo' into feature/go-backend
Integrate frontend refactoring changes including:
- Dashboard animation optimizations
- Login flow enhancements
- Orbit rebranding updates
2026-01-20 21:24:52 +08:00
yyhuni
b4bfab92e3 fix(doc-gen): update timestamp format to RFC3339 standard
- Change timestamp format from "2006-01-02" to time.RFC3339 constant
- Ensures generated documentation includes full ISO 8601 timestamp with timezone
- Improves consistency with standard time formatting practices
2026-01-20 21:18:53 +08:00
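A quick worked example of the format change, runnable as-is:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ts := time.Date(2026, 1, 20, 21, 18, 53, 0, time.FixedZone("CST", 8*3600))
	fmt.Println(ts.Format("2006-01-02")) // old format: 2026-01-20
	fmt.Println(ts.Format(time.RFC3339)) // new format: 2026-01-20T21:18:53+08:00
}
```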
yyhuni
72210c42d0 style(worker): format subdomain discovery constants and reorder tool definitions
- Align constant assignments with consistent spacing for improved readability
- Reorder tool name constants alphabetically for better maintainability
- Move toolSubfinder constant to end of tool definitions list
- Standardize formatting across stage and tool constant declarations
2026-01-20 21:15:08 +08:00
yyhuni
91aaf7997f feat(worker): implement workflow code generation and enhance subdomain discovery
- Add code generation tools (const-gen, doc-gen, schema-gen) to automate workflow metadata and documentation
- Implement config key mapper for dynamic template parameter mapping and validation
- Add comprehensive test coverage for command builder, template loader, and runner components
- Enhance subdomain discovery workflow with recon stage replacing passive stage for better reconnaissance
- Add subdomain result parsing and writing utilities for output handling
- Implement batch sender tests and improve server client reliability
- Add CI workflow to validate generated files are up to date before builds
- Convert YAML engine config to JSON schema for better validation and IDE support
- Add extensive test data fixtures for template validation edge cases
- Update Makefile and development scripts for improved build and test workflows
- Generate auto-documentation for subdomain discovery configuration reference
- Improve code maintainability through automated generation of constants and schemas
2026-01-20 21:09:55 +08:00
yyhuni
32e3179d58 refactor(frontend): optimize dashboard animations and extract dashboard data prefetch logic
- Remove "use client" directive from dashboard page and convert to server component
- Replace manual fade-in animation state with CSS animation class `animate-dashboard-fade-in`
- Extract dashboard data prefetch logic into reusable `prefetchDashboardData` callback in login page
- Parallelize login verification and bundle prefetch operations for faster execution
- Implement dynamic import for Monaco Editor with loading state to reduce bundle size (~2MB)
- Fix dependency array in template content effect to include full `templateContent` object
- Add `tree-node-item` class with `content-visibility: auto` for long list rendering optimization
- Simplify login flow by reusing extracted prefetch function to reduce code duplication
- Improves perceived performance by reducing animation overhead and optimizing bundle loading
2026-01-20 08:42:02 +08:00
yyhuni
487f7c84b5 fix(frontend): add null checks to PixelBlast renderer initialization
- Add renderer null check in setSize function to prevent errors during initialization
- Add renderer validation before composer.setSize call to ensure renderer exists
- Add null check in mapToPixels function to return safe default values when renderer is unavailable
- Add renderer existence check before calling renderer.render in animation loop
- Improve robustness of Three.js renderer lifecycle management to prevent runtime errors
2026-01-20 08:02:01 +08:00
yyhuni
b2cc83f569 feat(frontend): optimize login flow with dashboard data preloading and enhanced animations
- Implement dashboard data prefetching on successful login to eliminate loading delays
- Add blur transition effect to dashboard fade-in animation for smoother visual experience
- Replace login success splash screen logic with efficient data warming strategy
- Prefetch critical dashboard queries (asset statistics, scans, vulnerabilities) before navigation
- Prime auth cache to prevent full-screen loading state on dashboard entry
- Add pixel animation first-frame detection to coordinate boot splash timing
- Refactor login state management to use refs and callbacks for better control flow
- Update dashboard page transition to use will-change optimization for better performance
- Remove hardcoded login success delay constants in favor of data-driven navigation
- Improve user experience by seamlessly transitioning from login to fully-loaded dashboard
2026-01-19 23:49:16 +08:00
yyhuni
f854cf09be feat(frontend): add login success splash screen and dashboard fade-in animation
- Add "use client" directive to dashboard page for client-side state management
- Implement fade-in animation on dashboard page load using opacity transition
- Add success state tracking to login page with configurable delay timers
- Create separate boot screen output for successful authentication flow
- Add success prop to LoginBootScreen component to display auth success messages
- Define new constants for login success delay (1200ms) and fade duration (500ms)
- Update boot screen to conditionally render success or standard boot lines
- Enhance user experience with visual feedback during authentication completion
2026-01-19 22:31:16 +08:00
yyhuni
7e1c2c187a chore(skills): add Vercel React best practices guidelines for agents
- Add comprehensive Vercel React best practices skill documentation across .agent, .codex, and .gemini directories
- Include 50+ rule files covering async patterns, bundle optimization, client-side performance, and server-side rendering
- Add SKILL.md and AGENTS.md metadata files for skill configuration and agent integration
- Organize rules into categories: advanced patterns, async handling, bundle optimization, client optimization, JavaScript performance, rendering optimization, re-render prevention, and server-side caching
- Provide standardized guidelines for performance optimization and best practices across multiple AI agent platforms
2026-01-19 20:14:08 +08:00
yyhuni
4abb259ca0 feat(frontend): rebrand to Orbit and add login boot splash screen
- Replace "Star Patrol ASM Platform" branding with "Orbit ASM Platform" throughout
- Add SVG icon support and remove favicon.ico in favor of icon.svg
- Create new LoginBootScreen component for boot splash animation
- Implement boot phase state management (entering, visible, leaving, hidden)
- Add smooth transitions and animations for login page overlay
- Update metadata icons configuration to use SVG format
- Add glitch reveal animations and jitter effects to globals.css
- Enhance login page UX with minimum splash duration and fade transitions
- Update English and Chinese translations for new branding
- Improve system logs mock data structure
- Update package.json dependencies and configuration
- Ensure splash screen displays before auth check completes and redirect occurs
2026-01-19 11:10:02 +08:00
yyhuni
bbef6af000 fix(frontend): update filter examples to use correct wildcard syntax
- Replace wildcard patterns with asterisks (*) with trailing slash notation
- Update directories filter example from "/api/*" to "/api/"
- Update endpoints filter example from "/api/*" to "/api/"
- Update IP addresses filter example from "192.168.1.*" to "192.168.1."
- Update subdomains filter example from "*.test.com" to ".test.com"
- Update vulnerabilities filter example from "/api/*" to "/api/"
- Update websites filter example from "/api/*" to "/api/"
- Standardize filter syntax across all data table components for consistency
2026-01-18 21:41:30 +08:00
yyhuni
ba0864ed16 feat(target): add help tooltip for directories tab and update translations
- Import HelpCircle icon from lucide-react for help indicator
- Import Tooltip components for displaying contextual help information
- Restructure navigation layout to support help tooltip alongside tabs
- Add conditional tooltip display when directories tab is active
- Add directoriesHelp translation key to English messages
- Add directoriesHelp translation key to Chinese messages
- Improve UX by providing contextual guidance for directories functionality
2026-01-18 10:23:33 +08:00
yyhuni
f54827829a feat(dashboard): add vulnerability review status tracking and severity column
- Add review status indicator (pending/reviewed) to recent vulnerabilities table with visual badges
- Display severity column in vulnerability table for better visibility
- Import Circle and CheckCircle2 icons from lucide-react for status indicators
- Add tooltip translations for "reviewed" and "pending" status labels
- Update mock vulnerability data with isReviewed property for all entries
- Implement conditional styling for pending (blue) and reviewed (muted) status badges
- Enhance table layout to show vulnerability severity alongside review status
2026-01-18 08:58:18 +08:00
yyhuni
170021130c feat(worker): implement subdomain discovery workflow stages with wildcard detection
- Add stage_bruteforce.go with bruteforce subdomain enumeration logic
- Add stage_passive.go with passive reconnaissance stage implementation
- Add stage_merge.go with file merging and wildcard domain detection
- Add stages.go with stage orchestration and utility functions
- Update workflow.go to integrate new stages into discovery pipeline
- Implement wildcard detection with sampling and expansion ratio analysis
- Add deduplication logic during file merging to reduce redundant entries
- Implement parallel command execution for bruteforce operations
- Add wordlist management with local caching from server
- Include comprehensive logging and error handling throughout stages
2026-01-17 23:18:28 +08:00
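The wildcard-detection bullet above describes sampling plus expansion-ratio analysis. The sampling half can be sketched as below; the probe count and label scheme are assumptions, and the ratio analysis is omitted:

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
)

// isWildcard resolves a few random, almost-certainly-nonexistent labels
// under the domain; if all of them resolve, a wildcard DNS record very
// likely exists and brute-force hits should be treated with suspicion.
func isWildcard(domain string) bool {
	const samples = 3 // sample count is an assumption
	hits := 0
	for i := 0; i < samples; i++ {
		label := fmt.Sprintf("wc-probe-%d-%d", rand.Int63(), i)
		if addrs, err := net.LookupHost(label + "." + domain); err == nil && len(addrs) > 0 {
			hits++
		}
	}
	return hits == samples
}

func main() {
	fmt.Println("wildcard:", isWildcard("example.com"))
}
```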
yyhuni
b540f69152 feat(worker): implement subdomain discovery workflow and enhance validation
- Rename IsSubdomainMatchTarget to IsSubdomainOfTarget for clarity
- Add subdomain discovery workflow with template loader and helpers
- Implement workflow registry for managing scan workflows
- Add domain validator package for input validation
- Create wordlist server component for managing DNS resolver lists
- Add template loader activity for dynamic template management
- Implement worker configuration module with environment setup
- Update worker dependencies to include projectdiscovery/utils and govalidator
- Consolidate workspace directory configuration (WORKSPACE_DIR replaces RESULTS_BASE_PATH)
- Update seed generator to use standardized bulk-create API endpoint
- Update all service layer calls to use renamed validation function
2026-01-17 21:15:02 +08:00
yyhuni
d7f1e04855 chore: add server/.env to .gitignore and remove from git tracking 2026-01-17 08:25:45 +08:00
yyhuni
68ad18e6da Rename to Orbit 2026-01-16 09:03:20 +08:00
yyhuni
a7542d4a34 Rename the backend to server 2026-01-15 16:19:00 +08:00
yyhuni
6f02d9f3c5 feat(api): standardize API endpoints and update data generation logic
- Rename IP address endpoints from `/ip-addresses/` to `/host-ports` for consistency
- Update vulnerability endpoints from `/assets/vulnerabilities/` to `/vulnerabilities/`
- Remove trailing slashes from API endpoint paths for standardization
- Remove explicit `type` field from target generation in seed data
- Enhance website generation with deduplication logic and attempt limiting
- Add default admin user seed data to database initialization migration
- Improve data generator to prevent infinite loops and handle unique URL combinations
- Align frontend service calls with updated backend API structure
2026-01-15 13:02:26 +08:00
yyhuni
794846ca7a feat(backend): enhance vulnerability schema and add target validation for snapshots
- Expand vulnerability and vulnerability_snapshot table column sizes for better data handling
* Change url column from VARCHAR(2000) to TEXT for unlimited length
* Increase vuln_type from VARCHAR(100) to VARCHAR(200)
* Increase source from VARCHAR(50) to VARCHAR(100)
- Add input validation constraints to vulnerability DTOs
* Add max=200 binding constraint to VulnType field
* Add max=100 binding constraint to Source field
- Implement consistent target ID validation across snapshot handlers
* Add ErrTargetMismatch error handling in subdomain_snapshot handler
* Add ErrTargetMismatch error handling in website_snapshot handler
* Replace generic error strings with ErrTargetMismatch constant in services
- Improve error handling consistency by using defined error types instead of generic error messages
2026-01-15 12:33:19 +08:00
yyhuni
5eea7b2621 feat(backend): add input validation and default value initialization for models
- Add Content-Type validation in BindJSON to enforce application/json requests
- Implement BeforeCreate hooks for array and JSONB field initialization across models
* Endpoint and EndpointSnapshot: initialize Tech and MatchedGFPatterns arrays
* Scan: initialize EngineIDs, ContainerIDs arrays and StageProgress JSONB
* Vulnerability and VulnerabilitySnapshot: initialize RawOutput JSONB
* Website and WebsiteSnapshot: initialize Tech array
- Add ErrTargetMismatch error handling in snapshot handlers
* DirectorySnapshot, HostPortSnapshot, ScreenshotSnapshot handlers now validate targetId matches scan's target
- Enhance target validation in filter and validator packages
- Improve service layer validation for subdomain, website, and host port snapshots
- Prevent null/nil values in database by ensuring proper default initialization
2026-01-15 12:21:35 +08:00
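The BeforeCreate hooks described above follow GORM's standard hook signature; a minimal sketch for one model, with illustrative field and type choices:

```go
package models

import (
	"github.com/lib/pq"
	"gorm.io/gorm"
)

// Website mirrors the commit's pattern: initialize array fields in a
// BeforeCreate hook so NULL never reaches the database. Field and type
// choices here are illustrative.
type Website struct {
	ID   uint           `gorm:"primaryKey"`
	URL  string         `gorm:"size:2048"`
	Tech pq.StringArray `gorm:"type:text[]"`
}

// BeforeCreate is invoked by GORM just before the INSERT.
func (w *Website) BeforeCreate(tx *gorm.DB) error {
	if w.Tech == nil {
		w.Tech = pq.StringArray{} // persist an empty array instead of NULL
	}
	return nil
}
```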
yyhuni
069527a7f1 feat(backend): implement vulnerability and screenshot snapshot APIs with directories tab reorganization
- Add vulnerability snapshot DTO, handler, repository, and service layer with comprehensive test coverage
- Add screenshot snapshot DTO, handler, repository, and service layer for snapshot management
- Reorganize directories tab from secondary assets navigation to primary navigation in scan history and target layouts
- Update frontend navigation to include FolderSearch icon for directories tab with badge count display
- Add i18n translations for directories tab in English and Chinese messages
- Implement seed data generation tools with Python API client for testing and data population
- Add data generator, error handler, and progress tracking utilities for seed API
- Update target validator to support new snapshot-related validations
- Refactor organization and vulnerability handlers to support snapshot operations
- Add integration tests and property-based tests for vulnerability snapshot functionality
- Update Go module dependencies to support new snapshot features
2026-01-15 10:25:34 +08:00
yyhuni
e542633ad3 refactor(backend): consolidate migration files and restructure host port entities
- Remove seed data generation command (cmd/seed/main.go)
- Consolidate database migrations into single init schema file
- Rename ip_address DTO to host_port for consistency
- Add host_port_snapshot DTO and model for snapshot tracking
- Rename host_port handler and repository files for clarity
- Implement host_port_snapshot service layer with CRUD operations
- Update website_snapshot service to work with new host_port structure
- Enhance terminal login UI with focus state tracking and Tab key navigation
- Update docker-compose configuration for development environment
- Refactor directory and website snapshot DTOs for improved data structure
- Add comprehensive test coverage for model and handler changes
- Simplify database schema by consolidating related migrations into single initialization file
2026-01-14 18:04:16 +08:00
yyhuni
e8a9606d3b Optimize performance 2026-01-14 16:41:35 +08:00
yyhuni
dc2e1e027d Complete snapshot table APIs for endpoint, website, subdomain, and directory 2026-01-14 16:38:20 +08:00
yyhuni
b1847faa3a feat(frontend): add throttled ripple effect on mouse move to PixelBlast
- Add enableRipples prop to PixelBlast component for conditional ripple control
- Implement throttled ripple triggering on pointer move events (150ms interval)
- Remove separate pointerdown event listener and consolidate ripple logic
- Refactor onPointerMove to handle both ripple effects and liquid touch separately
- Improve performance by preventing excessive ripple calculations on rapid movements
2026-01-14 11:42:12 +08:00
yyhuni
e699842492 perf(frontend): optimize login page and animations with memoization and accessibility
- Memoize translations object in login page to prevent unnecessary re-renders
- Add support for prefers-reduced-motion media query in PixelBlast component
- Implement IntersectionObserver and Page Visibility API for intelligent animation pausing
- Limit device pixel ratio based on device type (mobile vs desktop) for better performance
- Add maxPixelRatio parameter to PixelBlast for fine-grained performance control
- Add autoPlay prop to Shuffle component for flexible animation control
- Disable autoPlay on Shuffle text animations in terminal login for better UX
- Add accessibility label to PixelBlast container when reduced motion is enabled
- Improve mobile performance by capping pixel ratio to 1.5 on mobile devices
- Respect user accessibility preferences while maintaining visual quality on desktop
2026-01-14 11:33:11 +08:00
yyhuni
08a4807bef feat(frontend): enhance terminal login UI with improved styling and i18n shortcuts
- Update PixelBlast animation with increased pixel size (6.5) and speed (0.35)
- Replace semantic color tokens with explicit zinc color palette for better visual consistency
- Add keyboard shortcuts translations to support multiple languages (en, zh)
- Implement i18n for all terminal UI labels: submit, cancel, clear, start/end actions
- Update terminal header and content styling with zinc-700 borders and zinc-100 text
- Enhance keyboard shortcuts hint display with localized action labels
- Improve text color hierarchy using zinc-400, zinc-500, and zinc-600 variants
2026-01-14 10:58:12 +08:00
yyhuni
191ff9837b feat(frontend): redesign login page with terminal UI and pixel blast animation
- Replace traditional card-based login form with immersive terminal-style interface
- Add PixelBlast animated background component for cyberpunk aesthetic
- Implement TerminalLogin component with typewriter and terminal effects
- Add new animation components: FaultyTerminal, PixelBlast, Shuffle with CSS modules
- Add gravity-stars background animation component from animate-ui
- Add terminal cursor blink animation to global styles
- Update login page translations to support terminal UI prompts and messages
- Replace Lottie animation with dynamic WebGL-based PixelBlast component
- Add dynamic imports to prevent SSR issues with WebGL rendering
- Update component registry to include @magicui and @react-bits registries
- Refactor login form state management to use async/await pattern
- Add fingerprint meta tag for search engine identification (FOFA/Shodan)
- Improve visual hierarchy with relative z-index layering for background and content
2026-01-14 10:48:41 +08:00
yyhuni
679dff9037 refactor(frontend): unify filter UI components and enhance smart filtering
- Replace DropdownMenu with Select component for severity filtering across data tables
- Add Filter icon from lucide-react to filter triggers for consistent visual design
- Update SelectTrigger width from fixed pixels to auto for responsive layout
- Integrate SmartFilterInput component into vulnerabilities data table
- Refactor severity filter options to use object structure with translated labels
- Consolidate filter UI patterns across organization targets, scan history, and vulnerabilities tables
- Register @animate-ui component registry in components.json
- Improve filter UX with consistent icon usage and flexible sizing
2026-01-14 09:51:35 +08:00
yyhuni
ce4330b628 refactor(frontend): centralize severity styling configuration
- Extract severity color and style definitions into dedicated severity-config module
- Create SEVERITY_STYLES constant with unified badge styling for all severity levels
- Create SEVERITY_COLORS constant for chart visualization consistency
- Add getSeverityStyle() helper function for dynamic severity badge generation
- Add SEVERITY_CARD_STYLES and SEVERITY_ICON_BG constants for notification styling
- Update dashboard components to use centralized severity configuration
- Update fingerprint columns to use getSeverityStyle() helper
- Update notification drawer to reference centralized severity styles
- Update search result cards to use centralized configuration
- Update vulnerability components to import from severity-config module
- Eliminate duplicate severity styling definitions across multiple components
- Improve maintainability by having single source of truth for severity styling
2026-01-14 09:05:14 +08:00
yyhuni
4ce6b148f8 feat(frontend): enhance vulnerability review status display with icons
- Add Circle and CheckCircle2 icons from lucide-react for visual status indicators
- Update reviewStatus column sizing (100px size, 90-110px range) for better layout
- Implement icon rendering: Circle for pending status, CheckCircle2 for reviewed
- Enhance Badge styling with improved hover states and ring effects
- Add gap spacing between icon and text in status badge
- Refactor status logic to use isPending variable for clearer code
- Update Chinese translations for review action labels to be more descriptive
- Improve visual feedback with conditional styling based on review status
- Maintain cursor pointer behavior only when onToggleReview callback is available
2026-01-14 08:43:47 +08:00
yyhuni
a89f775ee9 Complete vulnerability review and basic scan CRUD 2026-01-14 08:21:46 +08:00
yyhuni
e3003f33f9 Complete vulnerability review and basic scan CRUD 2026-01-14 08:21:34 +08:00
yyhuni
3760684b64 feat: add vulnerability review status feature
- Add is_reviewed and reviewed_at fields to vulnerability table
- Add PATCH /api/vulnerabilities/:id/review and /unreview endpoints
- Add POST /api/vulnerabilities/bulk-review and /bulk-unreview endpoints
- Add isReviewed filter parameter to list APIs
- Update frontend with review status indicator, filter tabs, and bulk actions
- Add i18n translations for review status
2026-01-13 19:53:12 +08:00
yyhuni
bfd7e11d09 perf(backend): optimize database seeding with batch inserts
- Replace individual Create() calls with CreateInBatches() for organizations, targets, and websites to reduce database round trips
- Build all records in memory before batch insertion instead of inserting one-by-one
- Implement chunked batch insert for organization-target links to handle large datasets efficiently
- Add ON CONFLICT DO NOTHING clause for website creation to handle duplicates gracefully
- Use strings.Join() for efficient SQL query construction in bulk insert operations
- Improve seeding performance by reducing database transactions from O(n) to O(n/batch_size)
- Add missing imports (strings, clause) required for batch operations
2026-01-13 18:57:18 +08:00
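The two techniques this commit names, CreateInBatches() and ON CONFLICT DO NOTHING, compose like this in GORM; the DSN, model, and batch size of 500 are placeholders:

```go
package main

import (
	"fmt"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

type Website struct {
	ID  uint   `gorm:"primaryKey"`
	URL string `gorm:"uniqueIndex"`
}

func main() {
	db, err := gorm.Open(postgres.Open("host=localhost user=dev dbname=dev"), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	records := make([]Website, 0, 2000)
	for i := 0; i < 2000; i++ {
		records = append(records, Website{URL: fmt.Sprintf("https://site-%d.example.com", i)})
	}
	// One multi-row INSERT per 500 records instead of 2000 round trips;
	// rows that collide on the unique URL index are silently skipped.
	db.Clauses(clause.OnConflict{DoNothing: true}).CreateInBatches(records, 500)
}
```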
yyhuni
f758feb0d0 Improve the vulnerability API 2026-01-13 18:46:43 +08:00
yyhuni
8798eed337 feat(backend,frontend): implement wordlist management and engine patch endpoint
- Add wordlist management system with create, list, delete, and content operations
- Implement wordlist repository, service, and handler layers
- Add wordlist DTO models for API requests and responses
- Create wordlist storage configuration with base path setting
- Add PATCH endpoint for partial engine updates alongside existing PUT endpoint
- Implement PatchEngineRequest DTO for optional field updates
- Add wordlist routes: POST/GET/DELETE for management, GET/PUT for content operations
- Remove redundant toast notifications from engine edit dialog (handled by hook)
- Configure storage settings in application config with environment variable support
- Initialize wordlist service and handler in main server setup
2026-01-13 18:03:36 +08:00
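PatchEngineRequest presumably relies on optional fields for partial updates; the standard Go pattern uses pointer fields so nil means "not sent". A sketch with assumed field names:

```go
package dto

// Engine is a stand-in for the engine model; field names here and on
// the request are assumptions based on the commit text.
type Engine struct {
	Name    string
	Config  string
	Enabled bool
}

// PatchEngineRequest uses pointer fields so a PATCH body can distinguish
// "field not sent" (nil) from "field set to its zero value".
type PatchEngineRequest struct {
	Name    *string `json:"name,omitempty"`
	Config  *string `json:"config,omitempty"`
	Enabled *bool   `json:"enabled,omitempty"`
}

// Apply copies only the fields present in the request onto the model.
func (r PatchEngineRequest) Apply(e *Engine) {
	if r.Name != nil {
		e.Name = *r.Name
	}
	if r.Config != nil {
		e.Config = *r.Config
	}
	if r.Enabled != nil {
		e.Enabled = *r.Enabled
	}
}
```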
yyhuni
bd1e25cfd5 Add screenshot creation validation 2026-01-13 17:42:19 +08:00
yyhuni
d775055572 Complete the screenshot API 2026-01-13 17:35:57 +08:00
yyhuni
00dfad60b8 Complete target asset statistics counts 2026-01-13 16:55:37 +08:00
yyhuni
a5c48fe4d4 feat(frontend,backend): implement IP address management and export functionality
- Add IP address DTO, handler, service, and repository layers in Go backend
- Implement IP address bulk delete endpoint at /ip-addresses/bulk-delete/
- Add IP address export endpoint with optional IP filtering by target
- Simplify IP address hosts column display using ExpandableCell component
- Update IP address export to support filtering selected IPs for download
- Add error handling and toast notifications for export operations
- Internationalize IP address column labels and tooltips in Chinese
- Update IP address service to support filtered exports with comma-separated IPs
- Add host-port mapping seeding for test data generation
- Refactor scope filter and repository queries to support IP address operations
2026-01-13 16:42:57 +08:00
yyhuni
85c880731c feat(frontend): internationalize data table and website columns
- Add Chinese translations for common column labels (name, description, status, actions, type)
- Translate vulnerability column headers (severity, source, vulnType, url, createdAt)
- Translate organization and target column headers to Chinese
- Translate subdomain and endpoint column headers with full Chinese localization
- Add comprehensive website column translations including statusCode, technologies, contentLength
- Translate directory and scheduledScan column headers to Chinese
- Update UnifiedDataTable to use i18n for "Columns" button text via tDataTable("showColumns")
- Fix websites-view to use correct translation key "website.statusCode" instead of "common.status"
- Ensure consistent terminology across all data table views for better user experience
2026-01-13 10:16:43 +08:00
yyhuni
c6b6507412 feat(frontend): internationalize tab labels in scan history and target layouts
- Replace hardcoded tab labels with i18n translation keys in scan history layout
- "Websites" → {t("tabs.websites")}
- "Subdomains" → {t("tabs.subdomains")}
- "IPs" → {t("tabs.ips")}
- "URLs" → {t("tabs.urls")}
- "Directories" → {t("tabs.directories")}
- Replace hardcoded tab labels with i18n translation keys in target layout
- Apply same translation key replacements across all tab triggers
- Add new tab translation keys to English messages (en.json)
- tabs.websites, tabs.subdomains, tabs.ips, tabs.urls, tabs.directories
- Add new tab translation keys to Chinese messages (zh.json)
- Standardize terminology: "网站" → "站点", "端点" → "URL"
- Update related dashboard and stat card translations for consistency
- Ensures consistent multilingual support across scan history and target management interfaces
2026-01-13 09:58:34 +08:00
yyhuni
af457dc44c feat(frontend,backend): implement directory, endpoint, and subdomain management APIs
- Remove words, lines, and duration fields from directory model and UI components
- Simplify directory columns by removing unnecessary metrics from table display
- Add directory, endpoint, and subdomain DTOs with proper validation and pagination
- Implement handlers for directory, endpoint, and subdomain CRUD operations
- Create repository layer for directory, endpoint, and subdomain data access
- Add service layer for directory, endpoint, and subdomain business logic
- Update API routes to use standalone endpoints (/directories, /endpoints, /subdomains)
- Fix subdomain bulk-create payload to use 'names' field instead of 'subdomains'
- Add database migration to drop unused directory_words and directory_lines tables
- Update seed data generation to support websites, endpoints, and directories per target
- Add target validator tests for improved test coverage
- Refactor subdomain service to support new API structure
2026-01-13 09:47:34 +08:00
yyhuni
9e01a6aa5e fix(frontend,backend): move bulk-delete endpoint to standalone websites route
- Move bulk-delete endpoint from `/targets/:id/websites/bulk-delete` to `/websites/bulk-delete`
- Update frontend WebsiteService to use new standalone endpoint path
- Update Go backend router configuration to register bulk-delete under standalone websites routes
- Update handler documentation to reflect correct endpoint path
- Simplifies API structure by treating bulk operations as standalone website operations rather than target-scoped
2026-01-12 22:16:34 +08:00
yyhuni
ed80772e6f feat(frontend,backend): implement website management and i18n for bulk operations
- Add website service layer with CRUD operations and filtering support
- Implement website handler with complete API endpoints
- Add website repository with database operations and query optimization
- Create website DTO for API request/response serialization
- Implement CSV export functionality for asset data
- Add scope filtering package for dynamic query building with tests
- Create database migrations for schema initialization and GIN indexes
- Migrate bulk add dialog to use i18n translations instead of hardcoded strings
- Update all frontend hooks to support pagination and filtering parameters
- Refactor organization and target services with improved error handling
- Add seed command for database initialization with sample data
- Update frontend messages (en.json, zh.json) with bulk operation translations
- Improve API client with better error handling and request formatting
- Add database migration runner to backend initialization
- Update go.mod and go.sum with new dependencies
2026-01-12 22:10:08 +08:00
yyhuni
a22af21dcb feat(frontend,backend): optimize data fetching and add database seeding
- Add database seeding utility (cmd/seed/main.go) to generate test data for organizations and targets
- Implement conditional query execution in useTargets hook with enabled option to prevent unnecessary requests
- Reduce page size from 50 to 20 in scheduled scan dialog for better performance
- Update target DTO and handler to support improved query filtering
- Enhance target repository with optimized database queries
- Replace generic "Add" button text with localized "Add Target" text in target views
- Remove redundant addButtonText prop from organization detail view
- Improve code formatting and add explanatory comments for data fetching logic
- These changes reduce unnecessary API calls on page load and provide better test data management for development
2026-01-12 18:43:16 +08:00
yyhuni
8de950a7a5 feat(organization): refactor target creation flow and fix target count queries
- Replace useBatchCreateTargets hook with direct service call in AddOrganizationDialog to avoid double toast notifications
- Simplify dialog state management by using isCreatingTargets boolean instead of mutation pending state
- Consolidate form reset and dialog close logic to execute after both organization and targets are created
- Fix target count queries in OrganizationRepository to exclude soft-deleted targets using INNER JOIN with deleted_at check
- Update FindByIDWithCount and FindAll methods to properly filter out deleted targets from count calculations
- Handle 204 No Content responses in batchCreateTargets service by returning default success response
2026-01-12 18:17:44 +08:00
yyhuni
9db84221e9 Complete part of the organization and target backend APIs
Rename the frontend project to 星巡
2026-01-12 17:59:37 +08:00
yyhuni
0728f3c01d feat(go-backend): add database auto-migration and fix Website model naming
- Add comprehensive database auto-migration in main.go with all models organized by dependency order
- Include core models (Organization, User, Target, ScanEngine, WorkerNode, etc.)
- Include scan-related models (Scan, ScanInputTarget, ScanLog, ScheduledScan)
- Include asset models (Subdomain, HostPortMapping, Website, Endpoint, Directory, Screenshot, Vulnerability)
- Include snapshot models for all asset types
- Include statistics and authentication models
- Rename WebSite struct to Website for consistency with Go naming conventions
- Update TableName method to reflect Website naming
- Add migration logging for debugging and monitoring purposes
2026-01-11 22:30:36 +08:00
yyhuni
4aa7b3d68a feat(go-backend): implement complete API layer with handlers, services, and repositories
- Add DTOs for user, organization, target, engine, pagination, and response handling
- Implement repository layer for user, organization, target, and engine entities
- Implement service layer with business logic for all core modules
- Implement HTTP handlers for user, organization, target, and engine endpoints
- Add complete CRUD API routes with soft delete support for organizations and targets
- Add environment configuration file with database, Redis, and logging settings
- Add docker-compose.dev.yml for PostgreSQL and Redis development dependencies
- Add comprehensive README.md with migration progress, API endpoints, and tech stack
- Update main.go to wire repositories, services, and handlers with dependency injection
- Update config.go to support .env file loading with environment variable priority
- Update database.go to initialize all repositories and services
2026-01-11 22:07:27 +08:00
yyhuni
3946a53337 refactor(go-backend): switch from Django pbkdf2 to bcrypt
- Simplify password.go to use bcrypt (standard Go approach)
- Remove Django password compatibility (not needed for fresh deployment)
- Update auth_handler to use VerifyPassword()
- All tests passing
2026-01-11 20:58:53 +08:00
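The "standard Go approach" this commit mentions is golang.org/x/crypto/bcrypt; the flow below is stock library usage, not the project's actual password.go:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// Hash at user-creation time; the cost factor is embedded in the hash.
	hash, err := bcrypt.GenerateFromPassword([]byte("s3cret"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	// At login, VerifyPassword presumably wraps a comparison like this.
	err = bcrypt.CompareHashAndPassword(hash, []byte("s3cret"))
	fmt.Println("password ok:", err == nil)
}
```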
yyhuni
c94fe1ec4b feat(go-backend): implement JWT authentication
- Add JWT token generation and validation (internal/auth/jwt.go)
- Add Django-compatible password verification (internal/auth/password.go)
- Add auth middleware for protected routes (internal/middleware/auth.go)
- Add auth handler with login, refresh, me endpoints (internal/handler/auth_handler.go)
- Add JWT config (secret, access/refresh expire times)
- Register auth routes in main.go
- All tests passing

API endpoints:
- POST /api/auth/login - User login
- POST /api/auth/refresh - Refresh access token
- GET /api/auth/me - Get current user (protected)
2026-01-11 20:55:59 +08:00
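The commit doesn't name its JWT library; golang-jwt/jwt is assumed here as the common Go choice, and the claim names are illustrative. A minimal sketch of token generation and validation behind the login/refresh endpoints above:

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	secret := []byte("change-me") // would come from the JWT config

	// Issue a short-lived access token.
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": "user-1",
		"exp": time.Now().Add(15 * time.Minute).Unix(),
	})
	signed, err := token.SignedString(secret)
	if err != nil {
		panic(err)
	}

	// Validate it, e.g. inside the auth middleware.
	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
		return secret, nil // HS256: the same secret signs and verifies
	})
	fmt.Println("token valid:", err == nil && parsed.Valid)
}
```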
yyhuni
6dea525527 feat(go-backend): add indexes and unique constraints to all models
- Add index tags for query optimization (idx_xxx)
- Add uniqueIndex tags for unique constraints
- Add composite unique indexes (e.g., unique_subdomain_name_target)
- Update Organization/Target to many-to-many relationship
- All models now ready for GORM AutoMigrate
- All tests passing
2026-01-11 20:47:25 +08:00
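The tag patterns this commit describes, including the composite unique_subdomain_name_target index, can be expressed on one model like this (field names are inferred from the commit text, not copied from the code):

```go
package models

// Subdomain shows the tag patterns the commit describes: a plain index
// for lookups plus a composite unique index shared across two columns.
// GORM combines fields tagged with the same uniqueIndex name into one
// multi-column constraint during AutoMigrate.
type Subdomain struct {
	ID       uint   `gorm:"primaryKey"`
	Name     string `gorm:"size:255;uniqueIndex:unique_subdomain_name_target"`
	TargetID uint   `gorm:"index:idx_subdomain_target;uniqueIndex:unique_subdomain_name_target"`
}

// TableName pins the table name for GORM AutoMigrate.
func (Subdomain) TableName() string { return "subdomains" }
```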
yyhuni
5b0416972a feat(go-backend): complete all Go models
- Add scan-related models: ScanLog, ScanInputTarget, ScheduledScan, SubfinderProviderSettings
- Add engine models: Wordlist, NucleiTemplateRepo
- Add notification models: Notification, NotificationSettings
- Add config model: BlacklistRule
- Add statistics models: AssetStatistics, StatisticsHistory
- Add auth models: User (auth_user), Session (django_session)
- Add shopspring/decimal dependency for Vulnerability model
- Update model_test.go with all 33 model table name tests
- All tests passing
2026-01-11 20:29:11 +08:00
yyhuni
5345a34cbd Refactor: remove Prefect 2026-01-11 19:31:47 +08:00
github-actions[bot]
3ca56abc3e chore: bump version to v1.5.12-dev 2026-01-11 09:22:30 +00:00
yyhuni
9703add22d feat(nuclei): support configurable Nuclei templates repository with Gitee mirror
- Add NUCLEI_TEMPLATES_REPO_URL setting to allow runtime configuration of template repository URL
- Refactor install.sh mirror parameter handling to use boolean flag instead of URL string
- Replace hardcoded GitHub repository URL with Gitee mirror option for faster downloads in mainland China
- Update environment variable configuration to persist Nuclei repository URL in .env file
- Improve shell script variable quoting and conditional syntax for better reliability
- Simplify mirror detection logic by using USE_MIRROR boolean flag throughout installation process
- Add support for automatic Gitee mirror selection when --mirror flag is enabled
2026-01-11 17:19:09 +08:00
github-actions[bot]
f5a489e2d6 chore: bump version to v1.5.11-dev 2026-01-11 08:54:04 +00:00
yyhuni
d75a3f6882 fix(task_distributor): adjust high load wait parameters and improve timeout handling
- Increase high load wait interval from 60 to 120 seconds (2 minutes)
- Increase max retries from 10 to 60 to support up to 2 hours total wait time
- Improve timeout message to show actual wait duration in minutes
- Remove duplicate return statement in worker selection logic
- Update notification message to reflect new wait parameters (2 minutes check interval, 2 hours max wait)
- Clean up trailing whitespace in task_distributor.py
- Remove redundant error message from install.sh about missing/incorrect image versions
- Better handling of high load scenarios with clearer logging and user communication
2026-01-11 16:41:05 +08:00
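The new wait parameters multiply out to the stated maximum: 60 retries at a 120-second interval is 2 hours. The distributor itself is Python (task_distributor.py); this Go loop only illustrates the timing logic:

```go
package main

import (
	"fmt"
	"time"
)

// waitForWorker re-checks every 120 seconds for up to 60 retries,
// i.e. a 2-hour maximum wait, matching the commit's parameters.
func waitForWorker(pick func() (string, bool)) (string, error) {
	const interval = 120 * time.Second
	const maxRetries = 60
	for i := 0; i < maxRetries; i++ {
		if w, ok := pick(); ok {
			return w, nil
		}
		time.Sleep(interval)
	}
	waited := maxRetries * interval
	return "", fmt.Errorf("no worker below the load threshold after %.0f minutes", waited.Minutes())
}

func main() {
	w, err := waitForWorker(func() (string, bool) { return "worker-1", true })
	fmt.Println(w, err)
}
```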
github-actions[bot]
59e48e5b15 chore: bump version to v1.5.10-dev 2026-01-11 08:19:39 +00:00
yyhuni
2d2ec93626 perf(screenshot): optimize memory usage and add URL collection fallback logic
- Add iterator(chunk_size=50) to ScreenshotSnapshot query to prevent BinaryField data caching and reduce memory consumption
- Implement fallback logic in URL collection: WebSite → HostPortMapping → Default URL with priority handling
- Update _collect_urls_from_provider to return tuple with data source information for better logging and debugging
- Add detailed logging to track which data source was used during URL collection
- Improve code documentation with clear return type hints and fallback priority explanation
- Prevents memory spikes when processing large screenshot datasets with binary image data
2026-01-11 16:14:56 +08:00
github-actions[bot]
ced9f811f4 chore: bump version to v1.5.8-dev 2026-01-11 08:09:37 +00:00
yyhuni
aa99b26f50 fix(vuln_scan): use tool-specific parameter names for endpoint scanning
- Add conditional logic to use "input_file" parameter for nuclei tool
- Use "endpoints_file" parameter for other scanning tools
- Improve compatibility with different vulnerability scanning tools
- Ensure correct parameter naming based on tool requirements
2026-01-11 15:59:39 +08:00
yyhuni
8342f196db Add nuclei website scanning as a default 2026-01-11 12:13:27 +08:00
yyhuni
1bd2a6ed88 Refactor: complete the providers 2026-01-11 11:15:59 +08:00
yyhuni
033ff89aee Refactor: supply data through providers 2026-01-11 10:29:27 +08:00
yyhuni
4284a0cd9a refactor(scan): remove deprecated provider implementations and cleanup
- Delete ListTargetProvider implementation and related tests
- Delete PipelineTargetProvider implementation and related tests
- Remove target_export_service.py unused service module
- Remove test files for common properties validation
- Update engine-preset-selector component in frontend
- Remove sponsor acknowledgment section from README
- Simplify provider architecture by consolidating implementations
2026-01-10 23:53:52 +08:00
yyhuni
943a4cb960 docs(docker): remove default credentials from startup message
- Remove hardcoded default username and password display from docker startup script
- Remove warning message about changing password after first login
- Improve security by not exposing default credentials in startup output
- Simplifies startup message output for cleaner user experience
2026-01-10 11:21:14 +08:00
yyhuni
eb2d853b76 docs: remove emoji symbols from README for better accessibility
- Remove shield emoji (🛡️) from main title
- Replace emoji prefixes in navigation links with plain text anchors
- Remove emoji icons from section headers (🌐, 📚, 📦, 🤝, 📧, 🎁, 🙏, ⚠️, 🌟, 📄)
- Replace emoji status indicators (⚠️, 🔍, 💡) with plain text equivalents
- Remove emoji bullet points and replace with standard formatting
- Simplify documentation for improved readability and cross-platform compatibility
2026-01-10 11:17:43 +08:00
github-actions[bot]
1184c18b74 chore: bump version to v1.5.7 2026-01-10 03:10:45 +00:00
yyhuni
8a6f1b6f24 feat(engine): add --force-sub flag for selective engine config updates
- Add --force-sub command flag to init_default_engine management command
- Allow updating only sub-engines while preserving user-customized full scan config
- Update docker/scripts/init-data.sh to always update full scan engine configuration
- Change docker/server/start.sh to use --force flag for initial engine setup
- Improve update.sh with better logging functions and formatted output
- Add color-coded log functions (log_step, log_ok, log_info, log_warn, log_error)
- Enhance update.sh UI with better visual formatting and warning messages
- Refactor error messages and user prompts for improved clarity
- This enables safer upgrades by preserving custom full scan configurations while updating sub-engines
2026-01-10 11:04:42 +08:00
yyhuni
255d505aba refactor(scan): remove deprecated amass engine configurations
- Remove amass_passive engine configuration from subdomain discovery defaults
- Remove amass_active engine configuration from subdomain discovery defaults
- Simplify engine configuration by eliminating unused amass-based scanners
- Streamline the default engine template for better maintainability
2026-01-10 10:51:07 +08:00
github-actions[bot]
d06a9bab1f chore: bump version to v1.5.7-dev 2026-01-10 02:48:21 +00:00
yyhuni
6d5c776bf7 chore: improve version detection and update deployment configuration
- Update version detection to support IMAGE_TAG environment variable for Docker containers
- Add fallback mechanism to check multiple version file paths (/app/VERSION and project root)
- Add IMAGE_TAG environment variable to docker-compose.dev.yml and docker-compose.yml
- Fix frontend access URL in start.sh to include correct port (8083)
- Update upgrade warning message in update.sh to recommend fresh installation with latest code
- Improve robustness of version retrieval with better error handling for missing files
2026-01-10 10:41:36 +08:00
github-actions[bot]
bf058dd67b chore: bump version to v1.5.6-dev 2026-01-10 02:33:15 +00:00
yyhuni
0532d7c8b8 feat(notifications): add WeChat Work (WeChat Enterprise) notification support
- Add wecom notification channel configuration to mock notification settings
- Initialize wecom with disabled state and empty webhook URL by default
- Update notification settings response to include wecom configuration
- Enable WeChat Work as an alternative notification channel alongside Discord
2026-01-10 10:29:33 +08:00
yyhuni
2ee9b5ffa2 Update version 2026-01-10 10:27:48 +08:00
yyhuni
648a1888d4 Add WeChat Work support 2026-01-10 10:16:01 +08:00
github-actions[bot]
2508268a45 chore: bump version to v1.5.4-dev 2026-01-10 02:10:05 +00:00
1034 changed files with 199901 additions and 11228 deletions

45
.github/workflows/check-generated-files.yml vendored Normal file

@@ -0,0 +1,45 @@
name: Check Generated Files

on:
  workflow_call: # run only when called by another workflow

permissions:
  contents: read

jobs:
  check:
    runs-on: ubuntu-22.04 # pin the version so runner updates don't change CI behavior
    steps:
      - uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.24' # keep in sync with go.mod
      - name: Generate files for all workflows
        working-directory: worker
        run: make generate
      - name: Check for differences
        run: |
          if ! git diff --exit-code; then
            echo "❌ Generated files are out of date!"
            echo "Please run: cd worker && make generate"
            echo ""
            echo "Changed files:"
            git status --porcelain
            echo ""
            echo "Diff:"
            git diff
            exit 1
          fi
          echo "✅ Generated files are up to date"
      - name: Run metadata consistency tests
        working-directory: worker
        run: make test-metadata
      - name: Run all tests
        working-directory: worker
        run: make test

13
.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,13 @@
name: CI

on:
  push:
    branches: [main, develop]
  pull_request:

permissions:
  contents: read

jobs:
  check-generated:
    uses: ./.github/workflows/check-generated-files.yml

169
.gitignore vendored

@@ -1,137 +1,60 @@
# ============================
# OS-related files
# ============================
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Go
*.exe
*.exe~
*.dll
*.so
*.dylib
*.test
*.out
vendor/
go.work
# ============================
# Frontend (Next.js/Node.js)
# ============================
# Dependency directories
front-back/node_modules/
front-back/.pnpm-store/
# Build artifacts
dist/
build/
bin/
# Next.js build output
front-back/.next/
front-back/out/
front-back/dist/
# Environment variable files
front-back/.env
front-back/.env.local
front-back/.env.development.local
front-back/.env.test.local
front-back/.env.production.local
# Runtime files and caches
front-back/.turbo/
front-back/.swc/
front-back/.eslintcache
front-back/.tsbuildinfo
# ============================
# Backend (Python/Django)
# ============================
# Python virtual environments
.venv/
venv/
env/
ENV/
# Python compiled files
*.pyc
*.pyo
*.pyd
__pycache__/
*.py[cod]
*$py.class
# Django
backend/db.sqlite3
backend/db.sqlite3-journal
backend/media/
backend/staticfiles/
backend/.env
backend/.env.local
# Python testing and coverage
.pytest_cache/
.coverage
htmlcov/
*.cover
.hypothesis/
# ============================
# Backend (Go)
# ============================
# Build artifacts
backend/bin/
backend/dist/
backend/*.exe
backend/*.exe~
backend/*.dll
backend/*.so
backend/*.dylib
# Testing
backend/*.test
backend/*.out
backend/*.prof
# Go workspace files
backend/go.work
backend/go.work.sum
# Go dependency management
backend/vendor/
# ============================
# IDEs and editors
# ============================
# IDE
.vscode/
.idea/
.cursor/
.claude/
.kiro/
.playwright-mcp/
*.swp
*.swo
*~
.DS_Store
# ============================
# Docker
# ============================
docker/.env
docker/.env.local
# SSL certificates and private keys (must not be committed)
docker/nginx/ssl/*.pem
docker/nginx/ssl/*.key
docker/nginx/ssl/*.crt
# ============================
# Log files and scan results
# ============================
# Environment
.env
.env.local
.env.*.local
**/.env
**/.env.local
**/.env.*.local
*.log
logs/
results/
.venv/
# Dev script runtime files (process IDs and startup logs)
backend/scripts/dev/.pids/
# Testing
coverage.txt
*.coverprofile
.hypothesis/
# ============================
# Temporary files
# ============================
# Temporary files
*.tmp
tmp/
temp/
.cache/
HGETALL
KEYS
vuln_scan/input_endpoints.txt
open-in-v0
.kiro/
.claude/
.specify/
# AI Assistant directories
codex/
openspec/
specs/
AGENTS.md
WARP.md
.opencode/
# SSL certificates
docker/nginx/ssl/*.pem
docker/nginx/ssl/*.key
docker/nginx/ssl/*.crt

340
README.md

@@ -1,340 +0,0 @@
<h1 align="center">XingRin - 星环</h1>
<p align="center">
<b>🛡️ Attack Surface Management (ASM) Platform | Automated Asset Discovery and Vulnerability Scanning</b>
</p>
<p align="center">
<a href="https://github.com/yyhuni/xingrin/stargazers"><img src="https://img.shields.io/github/stars/yyhuni/xingrin?style=flat-square&logo=github" alt="GitHub stars"></a>
<a href="https://github.com/yyhuni/xingrin/network/members"><img src="https://img.shields.io/github/forks/yyhuni/xingrin?style=flat-square&logo=github" alt="GitHub forks"></a>
<a href="https://github.com/yyhuni/xingrin/issues"><img src="https://img.shields.io/github/issues/yyhuni/xingrin?style=flat-square&logo=github" alt="GitHub issues"></a>
<a href="https://github.com/yyhuni/xingrin/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-PolyForm%20NC-blue?style=flat-square" alt="License"></a>
</p>
<p align="center">
<a href="#-功能特性">功能特性</a> •
<a href="#-全局资产搜索">资产搜索</a> •
<a href="#-快速开始">快速开始</a> •
<a href="#-文档">文档</a> •
<a href="#-反馈与贡献">反馈与贡献</a>
</p>
<p align="center">
<sub>🔍 Keywords: ASM | Attack Surface Management | Vulnerability Scanning | Asset Discovery | Asset Search | Bug Bounty | Penetration Testing | Nuclei | Subdomain Enumeration | EASM</sub>
</p>
---
## 🌐 Live Demo
**[https://xingrin.vercel.app/](https://xingrin.vercel.app/)**
> ⚠️ UI showcase only; no backend database is connected
---
<p align="center">
<b>🎨 Modern UI</b>
</p>
<p align="center">
<img src="docs/screenshots/light.png" alt="Light Mode" width="24%">
<img src="docs/screenshots/bubblegum.png" alt="Bubblegum" width="24%">
<img src="docs/screenshots/cosmic-night.png" alt="Cosmic Night" width="24%">
<img src="docs/screenshots/quantum-rose.png" alt="Quantum Rose" width="24%">
</p>
## 📚 Documentation
- [📖 Technical Docs](./docs/README.md) - technical documentation index (🚧 continuously improving)
- [🚀 Quick Start](./docs/quick-start.md) - one-command installation and deployment guide
- [🔄 Version Management](./docs/version-management.md) - Git-tag-driven automated version management
- [📦 Nuclei Template Architecture](./docs/nuclei-template-architecture.md) - storage and sync of the template repository
- [📖 Wordlist Architecture](./docs/wordlist-architecture.md) - storage and sync of wordlist files
- [🔍 Scan Flow Architecture](./docs/scan-flow-architecture.md) - the full scan flow and tool orchestration
---
## ✨ Features
### Scanning Capabilities
| Feature | Status | Tools | Notes |
|------|------|------|------|
| Subdomain scanning | ✅ | Subfinder, Amass, PureDNS | Passive collection + active brute forcing, aggregating 50+ data sources |
| Port scanning | ✅ | Naabu | Custom port ranges |
| Site discovery | ✅ | HTTPX | HTTP probing; automatically captures title, status code, and tech stack |
| Fingerprinting | ✅ | XingFinger | 27,000+ fingerprint rules from multiple fingerprint sources |
| URL collection | ✅ | Waymore, Katana | Historical data + active crawling |
| Directory scanning | ✅ | FFUF | High-speed brute forcing with smart wordlists |
| Vulnerability scanning | ✅ | Nuclei, Dalfox | 9,000+ PoC templates; XSS detection |
| Site screenshots | ✅ | Playwright | High-compression WebP storage |
### Platform Capabilities
| Feature | Status | Notes |
|------|------|------|
| Target management | ✅ | Multi-level organizations; domain and IP targets |
| Asset snapshots | ✅ | Diff scan results to track asset changes |
| Blacklist filtering | ✅ | Global and per-target; wildcard/CIDR support |
| Scheduled tasks | ✅ | Cron expressions for periodic automated scans |
| Distributed scanning | ✅ | Multiple Worker nodes with load-aware scheduling |
| Global search | ✅ | Expression syntax; multi-field combined queries |
| Notifications | ✅ | WeCom, Telegram, Discord |
| API key management | ✅ | Visual configuration of API keys for each data source |
### Scan Flow Architecture
The full scan pipeline runs through subdomain discovery, port scanning, site discovery, fingerprinting, URL collection, directory scanning, and vulnerability scanning (a sequential sketch follows the diagram).
```mermaid
flowchart LR
START["开始扫描"]
subgraph STAGE1["阶段 1: 资产发现"]
direction TB
SUB["子域名发现<br/>subfinder, amass, puredns"]
PORT["端口扫描<br/>naabu"]
SITE["站点识别<br/>httpx"]
FINGER["指纹识别<br/>xingfinger"]
SUB --> PORT --> SITE --> FINGER
end
subgraph STAGE2["阶段 2: 深度分析"]
direction TB
URL["URL 收集<br/>waymore, katana"]
DIR["目录扫描<br/>ffuf"]
SCREENSHOT["站点截图<br/>playwright"]
end
subgraph STAGE3["阶段 3: 漏洞检测"]
VULN["漏洞扫描<br/>nuclei, dalfox"]
end
FINISH["扫描完成"]
START --> STAGE1
FINGER --> STAGE2
STAGE2 --> STAGE3
STAGE3 --> FINISH
style START fill:#34495e,stroke:#2c3e50,stroke-width:2px,color:#fff
style FINISH fill:#27ae60,stroke:#229954,stroke-width:2px,color:#fff
style STAGE1 fill:#3498db,stroke:#2980b9,stroke-width:2px,color:#fff
style STAGE2 fill:#9b59b6,stroke:#8e44ad,stroke-width:2px,color:#fff
style STAGE3 fill:#e67e22,stroke:#d35400,stroke-width:2px,color:#fff
style SUB fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style PORT fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style SITE fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style FINGER fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style URL fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style DIR fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style SCREENSHOT fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style VULN fill:#f0b27a,stroke:#e67e22,stroke-width:1px,color:#fff
```
See the [scan flow architecture doc](./docs/scan-flow-architecture.md) for details.
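To make the stage gating concrete in code, here is a minimal sequential sketch; the `runStage` helper and its signature are illustrative assumptions, not the platform's actual orchestrator:
```go
package main

import "fmt"

// runStage stands in for the real tool orchestration (hypothetical helper).
func runStage(name string, tools ...string) error {
	fmt.Printf("stage %q: running %v\n", name, tools)
	return nil
}

func main() {
	// Stage 1 runs first; its tools execute in sequence, each feeding the next.
	if err := runStage("asset discovery", "subfinder", "naabu", "httpx", "xingfinger"); err != nil {
		return
	}
	// Stage 2 starts only after fingerprinting completes.
	if err := runStage("deep analysis", "waymore", "katana", "ffuf", "playwright"); err != nil {
		return
	}
	// Stage 3 runs last, against the assets gathered above.
	_ = runStage("vulnerability detection", "nuclei", "dalfox")
}
```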
### 🖥️ Distributed Architecture
- **Multi-node scanning** - Deploy multiple Worker nodes to scale scanning capacity horizontally
- **Local node** - Zero configuration; installation auto-registers a local Docker Worker
- **Remote nodes** - One-command SSH deployment of a remote VPS as a scan node
- **Load-aware scheduling** - Senses node load in real time and dispatches tasks to the best node (see the sketch after the diagram)
- **Node monitoring** - Real-time heartbeat checks; CPU/memory/disk status monitoring
- **Reconnection** - Offline nodes are detected automatically and rejoin once they recover
```mermaid
flowchart TB
subgraph MASTER["Master Server"]
direction TB
REDIS["Redis load cache"]
subgraph SCHEDULER["Task Distributor"]
direction TB
SUBMIT["Receive scan tasks"]
SELECT["Load-aware selection"]
DISPATCH["Smart dispatch"]
SUBMIT --> SELECT
SELECT --> DISPATCH
end
REDIS -.load data.-> SELECT
end
subgraph WORKERS["Worker node cluster"]
direction TB
W1["Worker 1 (local)<br/>CPU: 45% | MEM: 60%"]
W2["Worker 2 (remote)<br/>CPU: 30% | MEM: 40%"]
W3["Worker N (remote)<br/>CPU: 90% | MEM: 85%"]
end
DISPATCH -->|dispatch| W1
DISPATCH -->|dispatch| W2
DISPATCH -->|skipped: high load| W3
W1 -.heartbeat.-> REDIS
W2 -.heartbeat.-> REDIS
W3 -.heartbeat.-> REDIS
```
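A minimal sketch of the load-aware selection step, assuming a hypothetical `Worker` struct filled from the heartbeat data in the Redis load cache; the 85/85 thresholds mirror the agent's defaults, but the real scheduler's types may differ:
```go
package main

import "fmt"

// Worker mirrors the fields a node reports in its heartbeat (hypothetical struct).
type Worker struct {
	Name     string
	CPU, Mem float64 // usage percentages from the latest heartbeat
	Tasks    int     // currently running tasks
}

// pickWorker skips overloaded nodes and prefers the least busy remaining one.
func pickWorker(workers []Worker, cpuMax, memMax float64) *Worker {
	var best *Worker
	for i := range workers {
		w := &workers[i]
		if w.CPU > cpuMax || w.Mem > memMax {
			continue // high load: skipped, as in the diagram
		}
		if best == nil || w.Tasks < best.Tasks {
			best = w
		}
	}
	return best // nil means every node is overloaded; the task stays queued
}

func main() {
	workers := []Worker{
		{Name: "worker-1", CPU: 45, Mem: 60, Tasks: 2},
		{Name: "worker-2", CPU: 30, Mem: 40, Tasks: 1},
		{Name: "worker-N", CPU: 90, Mem: 85, Tasks: 5},
	}
	if w := pickWorker(workers, 85, 85); w != nil {
		fmt.Println("dispatch to", w.Name) // dispatch to worker-2
	}
}
```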
### 🔎 Global Asset Search
- **Multi-type search** - Covers both Website and Endpoint asset types
- **Expression syntax** - `=` (fuzzy), `==` (exact), and `!=` (not equal) operators (see the sketch after the examples)
- **Logical operators** - Combine conditions with `&&` (AND) and `||` (OR)
- **Multi-field queries** - host, url, title, tech, status, body, and header fields
- **CSV export** - Streams the full result set to CSV with no row limit
#### Search Syntax Examples
```bash
# Basic searches
host="api" # host contains "api"
status=="200" # status code is exactly 200
tech="nginx" # tech stack contains nginx
# Combined searches
host="api" && status=="200" # host contains api AND status code is 200
tech="vue" || tech="react" # tech stack contains vue OR react
# Complex queries
host="admin" && tech="php" && status=="200"
url="/api/v1" && status!="404"
```
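To pin down the operator semantics, here is a self-contained sketch that evaluates a single condition; the `match` helper is a hypothetical illustration, and the real parser additionally handles `&&`/`||` grouping across all supported fields:
```go
package main

import (
	"fmt"
	"strings"
)

// match applies one operator to a single asset field (hypothetical helper).
func match(fieldValue, op, want string) bool {
	switch op {
	case "=": // fuzzy: substring containment
		return strings.Contains(fieldValue, want)
	case "==": // exact equality
		return fieldValue == want
	case "!=": // not equal
		return fieldValue != want
	}
	return false
}

func main() {
	host, status := "api.example.com", "200"
	// host="api" && status=="200"
	fmt.Println(match(host, "=", "api") && match(status, "==", "200")) // true
}
```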
### 📊 Visual Interface
- **Statistics** - Asset and vulnerability dashboards
- **Real-time notifications** - WebSocket message push
- **Push channels** - Real-time delivery via WeCom, Telegram, and Discord
---
## 📦 Quick Start
### Requirements
- **OS**: Ubuntu 20.04+ / Debian 11+
- **Architecture**: AMD64 (x86_64) / ARM64 (aarch64)
- **Hardware**: 2 cores / 4 GB RAM minimum; 20 GB+ disk space
### One-Command Install
```bash
# Clone the project
git clone https://github.com/yyhuni/xingrin.git
cd xingrin
# Install and start (production mode)
sudo ./install.sh
# 🇨🇳 Mirror acceleration recommended for mainland China users (third-party mirrors may stop working; long-term availability is not guaranteed)
sudo ./install.sh --mirror
```
> **💡 About the --mirror flag**
> - Auto-configures Docker registry mirrors (China-based sources)
> - Accelerates Git repository clones (Nuclei templates, etc.)
### Accessing the Service
- **Web UI**: `https://ip:8083`
- **Default credentials**: admin / admin (change the password after first login)
### Common Commands
```bash
# Start services
sudo ./start.sh
# Stop services
sudo ./stop.sh
# Restart services
sudo ./restart.sh
# Uninstall
sudo ./uninstall.sh
```
## 🤝 Feedback & Contributing
- 💡 **Found a bug, or have ideas for the UI or features?** Submit an [Issue](https://github.com/yyhuni/xingrin/issues) or send a private message via the WeChat official account
## 📧 Contact
- WeChat official account: **塔罗安全学苑**
- WeChat group: open the menu at the bottom of the official account and tap the group entry; if the invite link has expired, DM me and I'll add you
<img src="docs/wechat-qrcode.png" alt="WeChat official account" width="200">
### 🎁 Follow the Account for a Free Fingerprint Library
| Fingerprint library | Entries |
|--------|------|
| ehole.json | 21,977 |
| ARL.yaml | 9,264 |
| goby.json | 7,086 |
| FingerprintHub.json | 3,147 |
> 💡 Follow the official account and reply 「指纹」 (fingerprint) to get them
## ☕ Sponsorship
If this project helps you, please consider buying me a milk tea. Your stars and sponsorships keep the free updates coming!
<p>
<img src="docs/wx_pay.jpg" alt="WeChat Pay" width="200">
<img src="docs/zfb_pay.jpg" alt="Alipay" width="200">
</p>
### 🙏 Thanks to Our Sponsors
| Name | Amount |
|------|------|
| X闭关中 | ¥88 |
## ⚠️ Disclaimer
**Important: read carefully before use**
1. This tool is intended solely for **authorized security testing** and **security research**
2. Users must obtain **legal authorization** for every target system
3. Using this tool for unauthorized penetration testing or attacks is **strictly prohibited**
4. Scanning systems without authorization is **illegal** and may expose you to legal liability
5. The developers accept **no responsibility for any misuse**
By using this tool you agree to:
- Use it only within the scope of legal authorization
- Comply with the laws and regulations of your jurisdiction
- Bear all consequences arising from misuse
## 🌟 Star History
If this project helps you, please give it a ⭐ Star!
[![Star History Chart](https://api.star-history.com/svg?repos=yyhuni/xingrin&type=Date)](https://star-history.com/#yyhuni/xingrin&Date)
## 📄 License
This project is licensed under the [GNU General Public License v3.0](LICENSE).
### Permitted Uses
- ✅ Personal learning and research
- ✅ Commercial and non-commercial use
- ✅ Modification and distribution
- ✅ Patent use
- ✅ Private use
### Obligations and Restrictions
- 📋 **Source disclosure**: source code must be provided when distributing
- 📋 **Same license**: derivative works must use the same license
- 📋 **Copyright notice**: the original copyright and license notices must be retained
- ❗ **No warranty**: provided without warranty of any kind
- ❌ Unauthorized penetration testing
- ❌ Any illegal activity


@@ -1 +1 @@
v1.5.3
v1.5.12-dev

agent/.air.toml

@@ -0,0 +1,13 @@
root = "."
tmp_dir = "tmp"
[build]
cmd = "go build -o ./tmp/agent ./cmd/agent"
bin = "./tmp/agent"
delay = 1000
include_ext = ["go", "tpl", "tmpl", "html"]
exclude_dir = ["tmp", "vendor", ".git"]
exclude_regex = ["_test\\.go"]
[log]
time = true

agent/Dockerfile

@@ -0,0 +1,41 @@
# syntax=docker/dockerfile:1
# ============================================
# Go Agent - build
# ============================================
FROM golang:1.25.6 AS builder
ARG GO111MODULE=on
ARG GOPROXY=https://goproxy.cn,direct
ENV GO111MODULE=$GO111MODULE
ENV GOPROXY=$GOPROXY
WORKDIR /src
# Cache dependencies
COPY agent/go.mod agent/go.sum ./
RUN go mod download
# Copy source
COPY agent ./agent
WORKDIR /src/agent
# Build (static where possible)
RUN CGO_ENABLED=0 go build -o /out/agent ./cmd/agent
# ============================================
# Go Agent - runtime
# ============================================
FROM debian:bookworm-20260112-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /out/agent /usr/local/bin/agent
CMD ["agent"]

agent/cmd/agent/main.go

@@ -0,0 +1,37 @@
package main
import (
"context"
"fmt"
"os"
"os/signal"
"syscall"
"github.com/yyhuni/lunafox/agent/internal/app"
"github.com/yyhuni/lunafox/agent/internal/config"
"github.com/yyhuni/lunafox/agent/internal/logger"
"go.uber.org/zap"
)
func main() {
if err := logger.Init(os.Getenv("LOG_LEVEL")); err != nil {
fmt.Fprintf(os.Stderr, "logger init failed: %v\n", err)
}
defer logger.Sync()
cfg, err := config.Load(os.Args[1:])
if err != nil {
logger.Log.Fatal("failed to load config", zap.Error(err))
}
wsURL, err := config.BuildWebSocketURL(cfg.ServerURL)
if err != nil {
logger.Log.Fatal("invalid server URL", zap.Error(err))
}
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
if err := app.Run(ctx, *cfg, wsURL); err != nil {
logger.Log.Fatal("agent stopped", zap.Error(err))
}
}

agent/go.mod

@@ -0,0 +1,48 @@
module github.com/yyhuni/lunafox/agent
go 1.24.5
require (
github.com/docker/docker v28.5.2+incompatible
github.com/gorilla/websocket v1.5.3
github.com/opencontainers/image-spec v1.1.1
github.com/shirou/gopsutil/v3 v3.24.5
go.uber.org/zap v1.27.0
)
require (
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/sys/atomicwriter v0.1.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/morikuni/aec v1.1.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.uber.org/multierr v1.10.0 // indirect
golang.org/x/sys v0.39.0 // indirect
golang.org/x/time v0.14.0 // indirect
gotest.tools/v3 v3.5.2 // indirect
)

agent/go.sum

@@ -0,0 +1,131 @@
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM=
github.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/morikuni/aec v1.1.0 h1:vBBl0pUnvi/Je71dsRrhMBtreIqNMYErSAbEeb8jrXQ=
github.com/morikuni/aec v1.1.0/go.mod h1:xDRgiq/iw5l+zkao76YTKzKttOp2cwPEne25HDkJnBw=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 h1:ssfIgGNANqpVFCndZvcuyKbl0g+UAVcbBcqGkG28H0Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0/go.mod h1:GQ/474YrbE4Jx8gZ4q5I4hrhUzM6UPzyrqJYV2AqPoQ=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0 h1:f0cb2XPmrqn4XMy9PNliTgRKJgS5WcL/u0/WRYGz4t0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0/go.mod h1:vnakAaFckOMiMtOIhFI2MNH4FYrZzXCYxmb1LlhoGz8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0 h1:Ckwye2FpXkYgiHX7fyVrN1uA/UYd9ounqqTuSNAv0k4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0/go.mod h1:teIFJh5pW2y+AN7riv6IBPX2DuesS3HgP39mwOspKwU=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217 h1:gRkg/vSppuSQoDjxyiGfN4Upv/h/DQmIR10ZU8dh4Ww=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=

agent/internal/app/agent.go

@@ -0,0 +1,139 @@
package app
import (
"context"
"errors"
"os"
"strconv"
"time"
"github.com/yyhuni/lunafox/agent/internal/config"
"github.com/yyhuni/lunafox/agent/internal/docker"
"github.com/yyhuni/lunafox/agent/internal/domain"
"github.com/yyhuni/lunafox/agent/internal/health"
"github.com/yyhuni/lunafox/agent/internal/logger"
"github.com/yyhuni/lunafox/agent/internal/metrics"
"github.com/yyhuni/lunafox/agent/internal/protocol"
"github.com/yyhuni/lunafox/agent/internal/task"
"github.com/yyhuni/lunafox/agent/internal/update"
agentws "github.com/yyhuni/lunafox/agent/internal/websocket"
"go.uber.org/zap"
)
func Run(ctx context.Context, cfg config.Config, wsURL string) error {
configUpdater := config.NewUpdater(cfg)
version := cfg.AgentVersion
hostname := os.Getenv("AGENT_HOSTNAME")
if hostname == "" {
var err error
hostname, err = os.Hostname()
if err != nil || hostname == "" {
hostname = "unknown"
}
}
logger.Log.Info("agent starting",
zap.String("version", version),
zap.String("hostname", hostname),
zap.String("server", cfg.ServerURL),
zap.String("ws", wsURL),
zap.Int("maxTasks", cfg.MaxTasks),
zap.Int("cpuThreshold", cfg.CPUThreshold),
zap.Int("memThreshold", cfg.MemThreshold),
zap.Int("diskThreshold", cfg.DiskThreshold),
)
client := agentws.NewClient(wsURL, cfg.APIKey)
collector := metrics.NewCollector()
healthManager := health.NewManager()
taskCounter := &task.Counter{}
heartbeat := agentws.NewHeartbeatSender(client, collector, healthManager, version, hostname, taskCounter.Count)
taskClient := task.NewClient(cfg.ServerURL, cfg.APIKey)
puller := task.NewPuller(taskClient, collector, taskCounter, cfg.MaxTasks, cfg.CPUThreshold, cfg.MemThreshold, cfg.DiskThreshold)
taskQueue := make(chan *domain.Task, cfg.MaxTasks)
puller.SetOnTask(func(t *domain.Task) {
logger.Log.Info("task received",
zap.Int("taskId", t.ID),
zap.Int("scanId", t.ScanID),
zap.String("workflow", t.WorkflowName),
zap.Int("stage", t.Stage),
zap.String("target", t.TargetName),
)
taskQueue <- t
})
dockerClient, err := docker.NewClient()
if err != nil {
logger.Log.Warn("docker client unavailable", zap.Error(err))
} else {
logger.Log.Info("docker client ready")
}
workerToken := os.Getenv("WORKER_TOKEN")
if workerToken == "" {
return errors.New("WORKER_TOKEN environment variable is required")
}
logger.Log.Info("worker token loaded")
executor := task.NewExecutor(dockerClient, taskClient, taskCounter, cfg.ServerURL, workerToken, version)
defer func() {
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := executor.Shutdown(shutdownCtx); err != nil && !errors.Is(err, context.DeadlineExceeded) {
logger.Log.Error("executor shutdown error", zap.Error(err))
}
}()
updater := update.NewUpdater(dockerClient, healthManager, puller, executor, configUpdater, cfg.APIKey, workerToken)
handler := agentws.NewHandler()
handler.OnTaskAvailable(puller.NotifyTaskAvailable)
handler.OnTaskCancel(func(taskID int) {
logger.Log.Info("task cancel requested", zap.Int("taskId", taskID))
executor.MarkCancelled(taskID)
executor.CancelTask(taskID)
})
handler.OnConfigUpdate(func(payload protocol.ConfigUpdatePayload) {
logger.Log.Info("config update received",
zap.String("maxTasks", formatOptionalInt(payload.MaxTasks)),
zap.String("cpuThreshold", formatOptionalInt(payload.CPUThreshold)),
zap.String("memThreshold", formatOptionalInt(payload.MemThreshold)),
zap.String("diskThreshold", formatOptionalInt(payload.DiskThreshold)),
)
cfgUpdate := config.Update{
MaxTasks: payload.MaxTasks,
CPUThreshold: payload.CPUThreshold,
MemThreshold: payload.MemThreshold,
DiskThreshold: payload.DiskThreshold,
}
configUpdater.Apply(cfgUpdate)
puller.UpdateConfig(cfgUpdate.MaxTasks, cfgUpdate.CPUThreshold, cfgUpdate.MemThreshold, cfgUpdate.DiskThreshold)
})
handler.OnUpdateRequired(updater.HandleUpdateRequired)
client.SetOnMessage(handler.Handle)
logger.Log.Info("starting heartbeat sender")
go heartbeat.Start(ctx)
logger.Log.Info("starting task puller")
go func() {
_ = puller.Run(ctx)
}()
logger.Log.Info("starting task executor")
go executor.Start(ctx, taskQueue)
logger.Log.Info("connecting to server websocket")
if err := client.Run(ctx); err != nil && !errors.Is(err, context.Canceled) {
return err
}
return nil
}
func formatOptionalInt(value *int) string {
if value == nil {
return "nil"
}
return strconv.Itoa(*value)
}


@@ -0,0 +1,53 @@
package config
import (
"errors"
"fmt"
)
// Config represents runtime settings for the agent.
type Config struct {
ServerURL string
APIKey string
AgentVersion string
MaxTasks int
CPUThreshold int
MemThreshold int
DiskThreshold int
}
// Validate ensures config values are usable.
func (c *Config) Validate() error {
if c.ServerURL == "" {
return errors.New("server URL is required")
}
if c.APIKey == "" {
return errors.New("api key is required")
}
if c.AgentVersion == "" {
return errors.New("AGENT_VERSION environment variable is required")
}
if c.MaxTasks < 1 {
return errors.New("max tasks must be at least 1")
}
if err := validatePercent("cpu threshold", c.CPUThreshold); err != nil {
return err
}
if err := validatePercent("mem threshold", c.MemThreshold); err != nil {
return err
}
if err := validatePercent("disk threshold", c.DiskThreshold); err != nil {
return err
}
if _, err := BuildWebSocketURL(c.ServerURL); err != nil {
return err
}
return nil
}
func validatePercent(name string, value int) error {
if value < 1 || value > 100 {
return fmt.Errorf("%s must be between 1 and 100", name)
}
return nil
}


@@ -0,0 +1,87 @@
package config
import (
"flag"
"fmt"
"os"
"strconv"
"strings"
)
const (
defaultMaxTasks = 5
defaultCPUThreshold = 85
defaultMemThreshold = 85
defaultDiskThreshold = 90
)
// Load parses configuration from environment variables and CLI flags.
func Load(args []string) (*Config, error) {
maxTasks, err := readEnvInt("MAX_TASKS", defaultMaxTasks)
if err != nil {
return nil, err
}
cpuThreshold, err := readEnvInt("CPU_THRESHOLD", defaultCPUThreshold)
if err != nil {
return nil, err
}
memThreshold, err := readEnvInt("MEM_THRESHOLD", defaultMemThreshold)
if err != nil {
return nil, err
}
diskThreshold, err := readEnvInt("DISK_THRESHOLD", defaultDiskThreshold)
if err != nil {
return nil, err
}
cfg := &Config{
ServerURL: strings.TrimSpace(os.Getenv("SERVER_URL")),
APIKey: strings.TrimSpace(os.Getenv("API_KEY")),
AgentVersion: strings.TrimSpace(os.Getenv("AGENT_VERSION")),
MaxTasks: maxTasks,
CPUThreshold: cpuThreshold,
MemThreshold: memThreshold,
DiskThreshold: diskThreshold,
}
fs := flag.NewFlagSet("agent", flag.ContinueOnError)
serverURL := fs.String("server-url", cfg.ServerURL, "Server base URL (e.g. https://1.1.1.1:8080)")
apiKey := fs.String("api-key", cfg.APIKey, "Agent API key")
maxTasksFlag := fs.Int("max-tasks", cfg.MaxTasks, "Maximum concurrent tasks")
cpuThresholdFlag := fs.Int("cpu-threshold", cfg.CPUThreshold, "CPU threshold percentage")
memThresholdFlag := fs.Int("mem-threshold", cfg.MemThreshold, "Memory threshold percentage")
diskThresholdFlag := fs.Int("disk-threshold", cfg.DiskThreshold, "Disk threshold percentage")
if err := fs.Parse(args); err != nil {
return nil, err
}
cfg.ServerURL = strings.TrimSpace(*serverURL)
cfg.APIKey = strings.TrimSpace(*apiKey)
cfg.MaxTasks = *maxTasksFlag
cfg.CPUThreshold = *cpuThresholdFlag
cfg.MemThreshold = *memThresholdFlag
cfg.DiskThreshold = *diskThresholdFlag
if err := cfg.Validate(); err != nil {
return nil, err
}
return cfg, nil
}
func readEnvInt(key string, fallback int) (int, error) {
val, ok := os.LookupEnv(key)
if !ok {
return fallback, nil
}
val = strings.TrimSpace(val)
if val == "" {
return fallback, nil
}
parsed, err := strconv.Atoi(val)
if err != nil {
return 0, fmt.Errorf("invalid %s: %w", key, err)
}
return parsed, nil
}


@@ -0,0 +1,75 @@
package config
import (
"testing"
)
func TestLoadConfigFromEnvAndFlags(t *testing.T) {
t.Setenv("SERVER_URL", "https://example.com")
t.Setenv("API_KEY", "abc12345")
t.Setenv("AGENT_VERSION", "v1.2.3")
t.Setenv("MAX_TASKS", "5")
t.Setenv("CPU_THRESHOLD", "80")
t.Setenv("MEM_THRESHOLD", "81")
t.Setenv("DISK_THRESHOLD", "82")
cfg, err := Load([]string{})
if err != nil {
t.Fatalf("load failed: %v", err)
}
if cfg.ServerURL != "https://example.com" {
t.Fatalf("expected server url from env")
}
if cfg.MaxTasks != 5 {
t.Fatalf("expected max tasks from env")
}
args := []string{
"--server-url=https://override.example.com",
"--api-key=deadbeef",
"--max-tasks=9",
"--cpu-threshold=70",
"--mem-threshold=71",
"--disk-threshold=72",
}
cfg, err = Load(args)
if err != nil {
t.Fatalf("load failed: %v", err)
}
if cfg.ServerURL != "https://override.example.com" {
t.Fatalf("expected server url from args")
}
if cfg.APIKey != "deadbeef" {
t.Fatalf("expected api key from args")
}
if cfg.MaxTasks != 9 {
t.Fatalf("expected max tasks from args")
}
if cfg.CPUThreshold != 70 || cfg.MemThreshold != 71 || cfg.DiskThreshold != 72 {
t.Fatalf("expected thresholds from args")
}
}
func TestLoadConfigMissingRequired(t *testing.T) {
t.Setenv("SERVER_URL", "")
t.Setenv("API_KEY", "")
t.Setenv("AGENT_VERSION", "v1.2.3")
_, err := Load([]string{})
if err == nil {
t.Fatalf("expected error when required values missing")
}
}
func TestLoadConfigInvalidEnvValue(t *testing.T) {
t.Setenv("SERVER_URL", "https://example.com")
t.Setenv("API_KEY", "abc")
t.Setenv("AGENT_VERSION", "v1.2.3")
t.Setenv("MAX_TASKS", "nope")
_, err := Load([]string{})
if err == nil {
t.Fatalf("expected error for invalid MAX_TASKS")
}
}


@@ -0,0 +1,49 @@
package config
import (
"sync"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
// Update holds optional configuration updates.
type Update = domain.ConfigUpdate
// Updater manages runtime configuration changes.
type Updater struct {
mu sync.RWMutex
cfg Config
}
// NewUpdater creates an updater with initial config.
func NewUpdater(cfg Config) *Updater {
return &Updater{cfg: cfg}
}
// Apply updates the configuration and returns the new snapshot.
func (u *Updater) Apply(update Update) Config {
u.mu.Lock()
defer u.mu.Unlock()
if update.MaxTasks != nil && *update.MaxTasks > 0 {
u.cfg.MaxTasks = *update.MaxTasks
}
if update.CPUThreshold != nil && *update.CPUThreshold > 0 {
u.cfg.CPUThreshold = *update.CPUThreshold
}
if update.MemThreshold != nil && *update.MemThreshold > 0 {
u.cfg.MemThreshold = *update.MemThreshold
}
if update.DiskThreshold != nil && *update.DiskThreshold > 0 {
u.cfg.DiskThreshold = *update.DiskThreshold
}
return u.cfg
}
// Snapshot returns a copy of current config.
func (u *Updater) Snapshot() Config {
u.mu.RLock()
defer u.mu.RUnlock()
return u.cfg
}


@@ -0,0 +1,39 @@
package config
import "testing"
func TestUpdaterApplyAndSnapshot(t *testing.T) {
cfg := Config{
ServerURL: "https://example.com",
APIKey: "key",
MaxTasks: 2,
CPUThreshold: 70,
MemThreshold: 80,
DiskThreshold: 90,
}
updater := NewUpdater(cfg)
snapshot := updater.Snapshot()
if snapshot.MaxTasks != 2 || snapshot.CPUThreshold != 70 {
t.Fatalf("unexpected snapshot values")
}
invalid := 0
update := Update{MaxTasks: &invalid, CPUThreshold: &invalid}
snapshot = updater.Apply(update)
if snapshot.MaxTasks != 2 || snapshot.CPUThreshold != 70 {
t.Fatalf("expected invalid update to be ignored")
}
maxTasks := 5
cpu := 85
mem := 60
snapshot = updater.Apply(Update{
MaxTasks: &maxTasks,
CPUThreshold: &cpu,
MemThreshold: &mem,
})
if snapshot.MaxTasks != 5 || snapshot.CPUThreshold != 85 || snapshot.MemThreshold != 60 {
t.Fatalf("unexpected applied update")
}
}


@@ -0,0 +1,50 @@
package config
import (
"errors"
"fmt"
"net/url"
"strings"
)
// BuildWebSocketURL derives the agent WebSocket endpoint from the server URL.
func BuildWebSocketURL(serverURL string) (string, error) {
trimmed := strings.TrimSpace(serverURL)
if trimmed == "" {
return "", errors.New("server URL is required")
}
parsed, err := url.Parse(trimmed)
if err != nil {
return "", err
}
switch strings.ToLower(parsed.Scheme) {
case "http":
parsed.Scheme = "ws"
case "https":
parsed.Scheme = "wss"
case "ws", "wss":
default:
if parsed.Scheme == "" {
return "", errors.New("server URL scheme is required")
}
return "", fmt.Errorf("unsupported server URL scheme: %s", parsed.Scheme)
}
parsed.Path = buildWSPath(parsed.Path)
parsed.RawQuery = ""
parsed.Fragment = ""
return parsed.String(), nil
}
func buildWSPath(path string) string {
trimmed := strings.TrimRight(path, "/")
if trimmed == "" {
return "/api/agents/ws"
}
if strings.HasSuffix(trimmed, "/api") {
return trimmed + "/agents/ws"
}
return trimmed + "/api/agents/ws"
}


@@ -0,0 +1,38 @@
package config
import "testing"
func TestBuildWebSocketURL(t *testing.T) {
tests := []struct {
input string
expected string
}{
{"https://example.com", "wss://example.com/api/agents/ws"},
{"http://example.com", "ws://example.com/api/agents/ws"},
{"https://example.com/api", "wss://example.com/api/agents/ws"},
{"https://example.com/base", "wss://example.com/base/api/agents/ws"},
{"wss://example.com", "wss://example.com/api/agents/ws"},
}
for _, tt := range tests {
got, err := BuildWebSocketURL(tt.input)
if err != nil {
t.Fatalf("unexpected error for %s: %v", tt.input, err)
}
if got != tt.expected {
t.Fatalf("input %s expected %s got %s", tt.input, tt.expected, got)
}
}
}
func TestBuildWebSocketURLInvalid(t *testing.T) {
if _, err := BuildWebSocketURL("example.com"); err == nil {
t.Fatalf("expected error for missing scheme")
}
if _, err := BuildWebSocketURL(" "); err == nil {
t.Fatalf("expected error for empty url")
}
if _, err := BuildWebSocketURL("ftp://example.com"); err == nil {
t.Fatalf("expected error for unsupported scheme")
}
}


@@ -0,0 +1,23 @@
package docker
import (
"context"
"github.com/docker/docker/api/types/container"
)
// Remove removes the container.
func (c *Client) Remove(ctx context.Context, containerID string) error {
return c.cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
Force: true,
RemoveVolumes: true,
})
}
// Stop stops a running container with a timeout.
func (c *Client) Stop(ctx context.Context, containerID string) error {
timeout := 10
return c.cli.ContainerStop(ctx, containerID, container.StopOptions{
Timeout: &timeout,
})
}


@@ -0,0 +1,46 @@
package docker
import (
"context"
"io"
"github.com/docker/docker/api/types/container"
imagetypes "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/client"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// Client wraps the Docker SDK client.
type Client struct {
cli *client.Client
}
// NewClient creates a Docker client using environment configuration.
func NewClient() (*Client, error) {
cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
if err != nil {
return nil, err
}
return &Client{cli: cli}, nil
}
// Close closes the Docker client.
func (c *Client) Close() error {
return c.cli.Close()
}
// ImagePull pulls an image from the registry.
func (c *Client) ImagePull(ctx context.Context, imageRef string) (io.ReadCloser, error) {
return c.cli.ImagePull(ctx, imageRef, imagetypes.PullOptions{})
}
// ContainerCreate creates a container.
func (c *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *ocispec.Platform, name string) (container.CreateResponse, error) {
return c.cli.ContainerCreate(ctx, config, hostConfig, networkingConfig, platform, name)
}
// ContainerStart starts a container.
func (c *Client) ContainerStart(ctx context.Context, containerID string, opts container.StartOptions) error {
return c.cli.ContainerStart(ctx, containerID, opts)
}


@@ -0,0 +1,49 @@
package docker
import (
"bytes"
"context"
"io"
"strconv"
"strings"
"github.com/docker/docker/api/types/container"
)
const (
maxErrorBytes = 4096
)
// TailLogs returns the last N lines of container logs, truncated to 4KB.
func (c *Client) TailLogs(ctx context.Context, containerID string, lines int) (string, error) {
reader, err := c.cli.ContainerLogs(ctx, containerID, container.LogsOptions{
ShowStdout: true,
ShowStderr: true,
Timestamps: false,
Tail: strconv.Itoa(lines),
})
if err != nil {
return "", err
}
defer reader.Close()
var buf bytes.Buffer
if _, err := io.Copy(&buf, reader); err != nil {
return "", err
}
out := buf.String()
out = strings.TrimSpace(out)
if len(out) > maxErrorBytes {
out = out[len(out)-maxErrorBytes:]
}
return out, nil
}
// TruncateErrorMessage clamps message length to 4KB.
func TruncateErrorMessage(message string) string {
if len(message) <= maxErrorBytes {
return message
}
return message[:maxErrorBytes]
}


@@ -0,0 +1,22 @@
package docker
import (
"strings"
"testing"
)
func TestTruncateErrorMessage(t *testing.T) {
short := "short message"
if got := TruncateErrorMessage(short); got != short {
t.Fatalf("expected message to stay unchanged")
}
long := strings.Repeat("x", maxErrorBytes+10)
got := TruncateErrorMessage(long)
if len(got) != maxErrorBytes {
t.Fatalf("expected length %d, got %d", maxErrorBytes, len(got))
}
if got != long[:maxErrorBytes] {
t.Fatalf("unexpected truncation result")
}
}


@@ -0,0 +1,20 @@
package docker
import (
"context"
"github.com/docker/docker/api/types/container"
)
// Wait waits for a container to stop and returns the exit code.
func (c *Client) Wait(ctx context.Context, containerID string) (int64, error) {
statusCh, errCh := c.cli.ContainerWait(ctx, containerID, container.WaitConditionNotRunning)
select {
case status := <-statusCh:
return status.StatusCode, nil
case err := <-errCh:
return 0, err
case <-ctx.Done():
return 0, ctx.Err()
}
}


@@ -0,0 +1,76 @@
package docker
import (
"context"
"fmt"
"os"
"strings"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/strslice"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
const workerImagePrefix = "yyhuni/lunafox-worker:"
// StartWorker starts a worker container for a task and returns the container ID.
func (c *Client) StartWorker(ctx context.Context, t *domain.Task, serverURL, serverToken, agentVersion string) (string, error) {
if t == nil {
return "", fmt.Errorf("task is nil")
}
if err := os.MkdirAll(t.WorkspaceDir, 0755); err != nil {
return "", fmt.Errorf("prepare workspace: %w", err)
}
image, err := resolveWorkerImage(agentVersion)
if err != nil {
return "", err
}
env := buildWorkerEnv(t, serverURL, serverToken)
config := &container.Config{
Image: image,
Env: env,
Cmd: strslice.StrSlice{},
}
hostConfig := &container.HostConfig{
Binds: []string{"/opt/lunafox:/opt/lunafox"},
AutoRemove: false,
OomScoreAdj: 500,
}
resp, err := c.cli.ContainerCreate(ctx, config, hostConfig, &network.NetworkingConfig{}, nil, "")
if err != nil {
return "", err
}
if err := c.cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
return "", err
}
return resp.ID, nil
}
func resolveWorkerImage(version string) (string, error) {
version = strings.TrimSpace(version)
if version == "" {
return "", fmt.Errorf("worker version is required")
}
return workerImagePrefix + version, nil
}
func buildWorkerEnv(t *domain.Task, serverURL, serverToken string) []string {
return []string{
fmt.Sprintf("SERVER_URL=%s", serverURL),
fmt.Sprintf("SERVER_TOKEN=%s", serverToken),
fmt.Sprintf("SCAN_ID=%d", t.ScanID),
fmt.Sprintf("TARGET_ID=%d", t.TargetID),
fmt.Sprintf("TARGET_NAME=%s", t.TargetName),
fmt.Sprintf("TARGET_TYPE=%s", t.TargetType),
fmt.Sprintf("WORKFLOW_NAME=%s", t.WorkflowName),
fmt.Sprintf("WORKSPACE_DIR=%s", t.WorkspaceDir),
fmt.Sprintf("CONFIG=%s", t.Config),
}
}


@@ -0,0 +1,50 @@
package docker
import (
"testing"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
func TestResolveWorkerImage(t *testing.T) {
if _, err := resolveWorkerImage(""); err == nil {
t.Fatalf("expected error for empty version")
}
if got, err := resolveWorkerImage("v1.2.3"); err != nil || got != workerImagePrefix+"v1.2.3" {
t.Fatalf("expected version image, got %s, err: %v", got, err)
}
}
func TestBuildWorkerEnv(t *testing.T) {
spec := &domain.Task{
ScanID: 1,
TargetID: 2,
TargetName: "example.com",
TargetType: "domain",
WorkflowName: "subdomain_discovery",
WorkspaceDir: "/opt/lunafox/results",
Config: "config-yaml",
}
env := buildWorkerEnv(spec, "https://server", "token")
expected := []string{
"SERVER_URL=https://server",
"SERVER_TOKEN=token",
"SCAN_ID=1",
"TARGET_ID=2",
"TARGET_NAME=example.com",
"TARGET_TYPE=domain",
"WORKFLOW_NAME=subdomain_discovery",
"WORKSPACE_DIR=/opt/lunafox/results",
"CONFIG=config-yaml",
}
if len(env) != len(expected) {
t.Fatalf("expected %d env entries, got %d", len(expected), len(env))
}
for i, item := range expected {
if env[i] != item {
t.Fatalf("expected env[%d]=%s got %s", i, item, env[i])
}
}
}


@@ -0,0 +1,8 @@
package domain
type ConfigUpdate struct {
MaxTasks *int `json:"maxTasks"`
CPUThreshold *int `json:"cpuThreshold"`
MemThreshold *int `json:"memThreshold"`
DiskThreshold *int `json:"diskThreshold"`
}


@@ -0,0 +1,10 @@
package domain
import "time"
type HealthStatus struct {
State string `json:"state"`
Reason string `json:"reason,omitempty"`
Message string `json:"message,omitempty"`
Since *time.Time `json:"since,omitempty"`
}


@@ -0,0 +1,13 @@
package domain
type Task struct {
ID int `json:"taskId"`
ScanID int `json:"scanId"`
Stage int `json:"stage"`
WorkflowName string `json:"workflowName"`
TargetID int `json:"targetId"`
TargetName string `json:"targetName"`
TargetType string `json:"targetType"`
WorkspaceDir string `json:"workspaceDir"`
Config string `json:"config"`
}


@@ -0,0 +1,6 @@
package domain
type UpdateRequiredPayload struct {
Version string `json:"version"`
Image string `json:"image"`
}


@@ -0,0 +1,51 @@
package health
import (
"sync"
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
// Status represents the agent health state reported in heartbeats.
type Status = domain.HealthStatus
// Manager stores current health status.
type Manager struct {
mu sync.RWMutex
status Status
}
// NewManager initializes the manager with ok status.
func NewManager() *Manager {
return &Manager{
status: Status{State: "ok"},
}
}
// Get returns a snapshot of current status.
func (m *Manager) Get() Status {
m.mu.RLock()
defer m.mu.RUnlock()
return m.status
}
// Set updates health status and timestamps transitions.
func (m *Manager) Set(state, reason, message string) {
m.mu.Lock()
defer m.mu.Unlock()
if m.status.State != state {
now := time.Now().UTC()
m.status.Since = &now
}
m.status.State = state
m.status.Reason = reason
m.status.Message = message
if state == "ok" {
m.status.Since = nil
m.status.Reason = ""
m.status.Message = ""
}
}


@@ -0,0 +1,33 @@
package health
import "testing"
func TestManagerSetTransitions(t *testing.T) {
mgr := NewManager()
initial := mgr.Get()
if initial.State != "ok" || initial.Since != nil {
t.Fatalf("expected initial ok status")
}
mgr.Set("paused", "update", "waiting")
status := mgr.Get()
if status.State != "paused" || status.Since == nil {
t.Fatalf("expected paused state with timestamp")
}
prevSince := status.Since
mgr.Set("paused", "still", "waiting more")
status = mgr.Get()
if status.Since == nil || !status.Since.Equal(*prevSince) {
t.Fatalf("expected unchanged since on same state")
}
if status.Reason != "still" || status.Message != "waiting more" {
t.Fatalf("expected updated reason/message")
}
mgr.Set("ok", "ignored", "ignored")
status = mgr.Get()
if status.State != "ok" || status.Since != nil || status.Reason != "" || status.Message != "" {
t.Fatalf("expected ok reset to clear fields")
}
}


@@ -0,0 +1,50 @@
package logger
import (
"os"
"strings"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
// Log is the shared agent logger. Defaults to a no-op logger until initialized.
var Log = zap.NewNop()
// Init configures the logger using the provided level; the ENV environment variable selects development or production encoding.
func Init(level string) error {
level = strings.TrimSpace(level)
if level == "" {
level = "info"
}
var zapLevel zapcore.Level
if err := zapLevel.UnmarshalText([]byte(level)); err != nil {
zapLevel = zapcore.InfoLevel
}
isDev := strings.EqualFold(os.Getenv("ENV"), "development")
var config zap.Config
if isDev {
config = zap.NewDevelopmentConfig()
config.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
} else {
config = zap.NewProductionConfig()
}
config.Level = zap.NewAtomicLevelAt(zapLevel)
logger, err := config.Build()
if err != nil {
Log = zap.NewNop()
return err
}
Log = logger
return nil
}
// Sync flushes any buffered log entries.
func Sync() {
if Log != nil {
_ = Log.Sync()
}
}


@@ -0,0 +1,58 @@
package metrics
import (
"github.com/shirou/gopsutil/v3/cpu"
"github.com/shirou/gopsutil/v3/disk"
"github.com/shirou/gopsutil/v3/mem"
"github.com/yyhuni/lunafox/agent/internal/logger"
"go.uber.org/zap"
)
// Collector gathers system metrics.
type Collector struct{}
// NewCollector creates a new Collector.
func NewCollector() *Collector {
return &Collector{}
}
// Sample returns CPU, memory, and disk usage percentages.
func (c *Collector) Sample() (float64, float64, float64) {
cpuPercent, err := cpuUsagePercent()
if err != nil {
logger.Log.Warn("metrics: cpu percent error", zap.Error(err))
}
memPercent, err := memUsagePercent()
if err != nil {
logger.Log.Warn("metrics: mem percent error", zap.Error(err))
}
diskPercent, err := diskUsagePercent("/")
if err != nil {
logger.Log.Warn("metrics: disk percent error", zap.Error(err))
}
return cpuPercent, memPercent, diskPercent
}
func cpuUsagePercent() (float64, error) {
values, err := cpu.Percent(0, false)
if err != nil || len(values) == 0 {
return 0, err
}
return values[0], nil
}
func memUsagePercent() (float64, error) {
info, err := mem.VirtualMemory()
if err != nil {
return 0, err
}
return info.UsedPercent, nil
}
func diskUsagePercent(path string) (float64, error) {
info, err := disk.Usage(path)
if err != nil {
return 0, err
}
return info.UsedPercent, nil
}


@@ -0,0 +1,11 @@
package metrics
import "testing"
func TestCollectorSample(t *testing.T) {
c := NewCollector()
cpu, mem, disk := c.Sample()
if cpu < 0 || mem < 0 || disk < 0 {
t.Fatalf("expected non-negative metrics")
}
}


@@ -0,0 +1,42 @@
package protocol
import (
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
const (
MessageTypeHeartbeat = "heartbeat"
MessageTypeTaskAvailable = "task_available"
MessageTypeTaskCancel = "task_cancel"
MessageTypeConfigUpdate = "config_update"
MessageTypeUpdateRequired = "update_required"
)
type Message struct {
Type string `json:"type"`
Payload interface{} `json:"payload"`
Timestamp time.Time `json:"timestamp"`
}
type HealthStatus = domain.HealthStatus
type HeartbeatPayload struct {
CPU float64 `json:"cpu"`
Mem float64 `json:"mem"`
Disk float64 `json:"disk"`
Tasks int `json:"tasks"`
Version string `json:"version"`
Hostname string `json:"hostname"`
Uptime int64 `json:"uptime"`
Health HealthStatus `json:"health"`
}
type ConfigUpdatePayload = domain.ConfigUpdate
type UpdateRequiredPayload = domain.UpdateRequiredPayload
type TaskCancelPayload struct {
TaskID int `json:"taskId"`
}
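For reference, a self-contained sketch of what a heartbeat frame serializes to; the shapes are copied locally from the protocol types above so the snippet runs standalone, and all field values are illustrative:
```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Local copies of the protocol shapes above, so this sketch compiles on its own.
type HealthStatus struct {
	State string `json:"state"`
}

type HeartbeatPayload struct {
	CPU      float64      `json:"cpu"`
	Mem      float64      `json:"mem"`
	Disk     float64      `json:"disk"`
	Tasks    int          `json:"tasks"`
	Version  string       `json:"version"`
	Hostname string       `json:"hostname"`
	Uptime   int64        `json:"uptime"`
	Health   HealthStatus `json:"health"`
}

type Message struct {
	Type      string      `json:"type"`
	Payload   interface{} `json:"payload"`
	Timestamp time.Time   `json:"timestamp"`
}

func main() {
	msg := Message{
		Type: "heartbeat", // protocol.MessageTypeHeartbeat
		Payload: HeartbeatPayload{
			CPU: 42.5, Mem: 61.3, Disk: 70.1,
			Tasks: 2, Version: "v1.5.12-dev", Hostname: "worker-1",
			Health: HealthStatus{State: "ok"},
		},
		Timestamp: time.Now().UTC(),
	}
	data, _ := json.Marshal(msg)
	fmt.Println(string(data))
	// {"type":"heartbeat","payload":{"cpu":42.5,...,"health":{"state":"ok"}},"timestamp":"..."}
}
```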


@@ -0,0 +1,118 @@
package task
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"fmt"
"net/http"
"strings"
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
// Client handles HTTP API requests to the server.
type Client struct {
baseURL string
apiKey string
http *http.Client
}
// NewClient creates a new task client.
func NewClient(serverURL, apiKey string) *Client {
transport := http.DefaultTransport
if base, ok := transport.(*http.Transport); ok {
clone := base.Clone()
clone.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
transport = clone
}
return &Client{
baseURL: strings.TrimRight(serverURL, "/"),
apiKey: apiKey,
http: &http.Client{
Timeout: 15 * time.Second,
Transport: transport,
},
}
}
// PullTask requests a task from the server. Returns nil when no task is available.
func (c *Client) PullTask(ctx context.Context) (*domain.Task, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodPost, c.baseURL+"/api/agent/tasks/pull", nil)
if err != nil {
return nil, err
}
req.Header.Set("X-Agent-Key", c.apiKey)
resp, err := c.http.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNoContent {
return nil, nil
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("pull task failed: status %d", resp.StatusCode)
}
var task domain.Task
if err := json.NewDecoder(resp.Body).Decode(&task); err != nil {
return nil, err
}
return &task, nil
}
// UpdateStatus reports task status to the server with retry.
func (c *Client) UpdateStatus(ctx context.Context, taskID int, status, errorMessage string) error {
payload := map[string]string{
"status": status,
}
if errorMessage != "" {
payload["errorMessage"] = errorMessage
}
body, err := json.Marshal(payload)
if err != nil {
return err
}
var lastErr error
for attempt := 0; attempt < 3; attempt++ {
if attempt > 0 {
backoff := time.Duration(5<<attempt) * time.Second // 10s before attempt 1, 20s before attempt 2
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(backoff):
}
}
req, err := http.NewRequestWithContext(ctx, http.MethodPatch, fmt.Sprintf("%s/api/agent/tasks/%d/status", c.baseURL, taskID), bytes.NewReader(body))
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("X-Agent-Key", c.apiKey)
resp, err := c.http.Do(req)
if err != nil {
lastErr = err
continue
}
resp.Body.Close()
if resp.StatusCode == http.StatusOK {
return nil
}
lastErr = fmt.Errorf("update status failed: status %d", resp.StatusCode)
// Don't retry 4xx client errors (except 429)
if resp.StatusCode >= 400 && resp.StatusCode < 500 && resp.StatusCode != 429 {
return lastErr
}
}
return lastErr
}
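Note the schedule implied by `5<<attempt`: there is no wait before the first attempt, then 10s and 20s before the two retries. A quick standalone check of the arithmetic:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Mirrors the backoff expression in Client.UpdateStatus above.
	for attempt := 0; attempt < 3; attempt++ {
		if attempt == 0 {
			fmt.Println("attempt 0: no backoff")
			continue
		}
		backoff := time.Duration(5<<attempt) * time.Second
		fmt.Printf("attempt %d: backoff %v\n", attempt, backoff) // 10s, then 20s
	}
}
```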


@@ -0,0 +1,187 @@
package task
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
"strings"
"testing"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
func TestClientPullTaskNoContent(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
if r.URL.Path != "/api/agent/tasks/pull" {
t.Fatalf("unexpected path %s", r.URL.Path)
}
return &http.Response{
StatusCode: http.StatusNoContent,
Body: io.NopCloser(strings.NewReader("")),
Header: http.Header{},
}, nil
}),
},
}
task, err := client.PullTask(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if task != nil {
t.Fatalf("expected nil task")
}
}
func TestClientPullTaskOK(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
if r.Header.Get("X-Agent-Key") == "" {
t.Fatalf("missing api key header")
}
body, _ := json.Marshal(domain.Task{ID: 1})
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewReader(body)),
Header: http.Header{},
}, nil
}),
},
}
task, err := client.PullTask(context.Background())
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if task == nil || task.ID != 1 {
t.Fatalf("unexpected task")
}
}
func TestClientUpdateStatus(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
if r.Method != http.MethodPatch {
t.Fatalf("expected PATCH")
}
if r.Header.Get("X-Agent-Key") == "" {
t.Fatalf("missing api key header")
}
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(strings.NewReader("")),
Header: http.Header{},
}, nil
}),
},
}
if err := client.UpdateStatus(context.Background(), 1, "completed", ""); err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func TestClientPullTaskErrorStatus(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: http.StatusBadRequest,
Body: io.NopCloser(strings.NewReader("bad")),
Header: http.Header{},
}, nil
}),
},
}
if _, err := client.PullTask(context.Background()); err == nil {
t.Fatalf("expected error for non-200 status")
}
}
func TestClientPullTaskBadJSON(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(strings.NewReader("{bad json")),
Header: http.Header{},
}, nil
}),
},
}
if _, err := client.PullTask(context.Background()); err == nil {
t.Fatalf("expected error for invalid json")
}
}
func TestClientUpdateStatusIncludesErrorMessage(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
body, err := io.ReadAll(r.Body)
if err != nil {
t.Fatalf("read body: %v", err)
}
var payload map[string]string
if err := json.Unmarshal(body, &payload); err != nil {
t.Fatalf("unmarshal body: %v", err)
}
if payload["status"] != "failed" {
t.Fatalf("expected status failed")
}
if payload["errorMessage"] != "boom" {
t.Fatalf("expected error message")
}
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(strings.NewReader("")),
Header: http.Header{},
}, nil
}),
},
}
if err := client.UpdateStatus(context.Background(), 1, "failed", "boom"); err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func TestClientUpdateStatusErrorStatus(t *testing.T) {
client := &Client{
baseURL: "http://example",
apiKey: "key",
http: &http.Client{
Transport: roundTripFunc(func(r *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(strings.NewReader("")),
Header: http.Header{},
}, nil
}),
},
}
if err := client.UpdateStatus(context.Background(), 1, "completed", ""); err == nil {
t.Fatalf("expected error for non-200 status")
}
}
type roundTripFunc func(*http.Request) (*http.Response, error)
func (f roundTripFunc) RoundTrip(r *http.Request) (*http.Response, error) {
return f(r)
}


@@ -0,0 +1,23 @@
package task
import "sync/atomic"
// Counter tracks running task count.
type Counter struct {
value int64
}
// Inc increments the counter.
func (c *Counter) Inc() {
atomic.AddInt64(&c.value, 1)
}
// Dec decrements the counter.
func (c *Counter) Dec() {
atomic.AddInt64(&c.value, -1)
}
// Count returns current count.
func (c *Counter) Count() int {
return int(atomic.LoadInt64(&c.value))
}


@@ -0,0 +1,18 @@
package task
import "testing"
func TestCounterIncDec(t *testing.T) {
var counter Counter
counter.Inc()
counter.Inc()
if got := counter.Count(); got != 2 {
t.Fatalf("expected count 2, got %d", got)
}
counter.Dec()
if got := counter.Count(); got != 1 {
t.Fatalf("expected count 1, got %d", got)
}
}


@@ -0,0 +1,258 @@
package task
import (
"context"
"errors"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/yyhuni/lunafox/agent/internal/docker"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
const defaultMaxRuntime = 7 * 24 * time.Hour
// Executor runs tasks inside worker containers.
type Executor struct {
docker DockerRunner
client statusReporter
counter *Counter
serverURL string
workerToken string
agentVersion string
maxRuntime time.Duration
mu sync.Mutex
running map[int]context.CancelFunc
cancelMu sync.Mutex
cancelled map[int]struct{}
wg sync.WaitGroup
stopping atomic.Bool
}
// statusReporter reports task status transitions back to the server.
type statusReporter interface {
UpdateStatus(ctx context.Context, taskID int, status, errorMessage string) error
}
// DockerRunner abstracts the docker operations the executor needs.
type DockerRunner interface {
StartWorker(ctx context.Context, t *domain.Task, serverURL, serverToken, agentVersion string) (string, error)
Wait(ctx context.Context, containerID string) (int64, error)
Stop(ctx context.Context, containerID string) error
Remove(ctx context.Context, containerID string) error
TailLogs(ctx context.Context, containerID string, lines int) (string, error)
}
// NewExecutor creates an Executor.
func NewExecutor(dockerClient DockerRunner, taskClient statusReporter, counter *Counter, serverURL, workerToken, agentVersion string) *Executor {
return &Executor{
docker: dockerClient,
client: taskClient,
counter: counter,
serverURL: serverURL,
workerToken: workerToken,
agentVersion: agentVersion,
maxRuntime: defaultMaxRuntime,
running: map[int]context.CancelFunc{},
cancelled: map[int]struct{}{},
}
}
// Start processes tasks from the queue.
func (e *Executor) Start(ctx context.Context, tasks <-chan *domain.Task) {
for {
select {
case <-ctx.Done():
return
case t, ok := <-tasks:
if !ok {
return
}
if t == nil {
continue
}
if e.stopping.Load() {
// During shutdown/update: drain the queue but don't start new work.
continue
}
if e.isCancelled(t.ID) {
e.reportStatus(ctx, t.ID, "cancelled", "")
e.clearCancelled(t.ID)
continue
}
go e.execute(ctx, t)
}
}
}
// CancelTask requests cancellation of a running task.
func (e *Executor) CancelTask(taskID int) {
e.mu.Lock()
cancel := e.running[taskID]
e.mu.Unlock()
if cancel != nil {
cancel()
}
}
// MarkCancelled records a task as cancelled to prevent execution.
func (e *Executor) MarkCancelled(taskID int) {
e.cancelMu.Lock()
e.cancelled[taskID] = struct{}{}
e.cancelMu.Unlock()
}
func (e *Executor) reportStatus(ctx context.Context, taskID int, status, errorMessage string) {
if e.client == nil {
return
}
statusCtx, cancel := context.WithTimeout(context.WithoutCancel(ctx), 30*time.Second)
defer cancel()
_ = e.client.UpdateStatus(statusCtx, taskID, status, errorMessage)
}
func (e *Executor) execute(ctx context.Context, t *domain.Task) {
e.wg.Add(1)
defer e.wg.Done()
defer e.clearCancelled(t.ID)
if e.counter != nil {
e.counter.Inc()
defer e.counter.Dec()
}
if e.workerToken == "" {
e.reportStatus(ctx, t.ID, "failed", "missing worker token")
return
}
if e.docker == nil {
e.reportStatus(ctx, t.ID, "failed", "docker client unavailable")
return
}
runCtx, cancel := context.WithTimeout(ctx, e.maxRuntime)
defer cancel()
containerID, err := e.docker.StartWorker(runCtx, t, e.serverURL, e.workerToken, e.agentVersion)
if err != nil {
message := docker.TruncateErrorMessage(err.Error())
e.reportStatus(ctx, t.ID, "failed", message)
return
}
defer func() {
_ = e.docker.Remove(context.Background(), containerID)
}()
e.trackCancel(t.ID, cancel)
defer e.clearCancel(t.ID)
exitCode, waitErr := e.docker.Wait(runCtx, containerID)
if waitErr != nil {
if errors.Is(waitErr, context.DeadlineExceeded) || errors.Is(runCtx.Err(), context.DeadlineExceeded) {
e.handleTimeout(ctx, t, containerID)
return
}
if errors.Is(waitErr, context.Canceled) || errors.Is(runCtx.Err(), context.Canceled) {
e.handleCancel(ctx, t, containerID)
return
}
message := docker.TruncateErrorMessage(waitErr.Error())
e.reportStatus(ctx, t.ID, "failed", message)
return
}
if runCtx.Err() != nil {
if errors.Is(runCtx.Err(), context.DeadlineExceeded) {
e.handleTimeout(ctx, t, containerID)
return
}
if errors.Is(runCtx.Err(), context.Canceled) {
e.handleCancel(ctx, t, containerID)
return
}
}
if exitCode == 0 {
e.reportStatus(ctx, t.ID, "completed", "")
return
}
logs, _ := e.docker.TailLogs(context.Background(), containerID, 100)
message := logs
if message == "" {
message = fmt.Sprintf("container exited with code %d", exitCode)
}
message = docker.TruncateErrorMessage(message)
e.reportStatus(ctx, t.ID, "failed", message)
}
func (e *Executor) handleCancel(ctx context.Context, t *domain.Task, containerID string) {
_ = e.docker.Stop(context.Background(), containerID)
e.reportStatus(ctx, t.ID, "cancelled", "")
}
func (e *Executor) handleTimeout(ctx context.Context, t *domain.Task, containerID string) {
_ = e.docker.Stop(context.Background(), containerID)
message := docker.TruncateErrorMessage("task timed out")
e.reportStatus(ctx, t.ID, "failed", message)
}
func (e *Executor) trackCancel(taskID int, cancel context.CancelFunc) {
e.mu.Lock()
defer e.mu.Unlock()
e.running[taskID] = cancel
}
func (e *Executor) clearCancel(taskID int) {
e.mu.Lock()
defer e.mu.Unlock()
delete(e.running, taskID)
}
func (e *Executor) isCancelled(taskID int) bool {
e.cancelMu.Lock()
defer e.cancelMu.Unlock()
_, ok := e.cancelled[taskID]
return ok
}
func (e *Executor) clearCancelled(taskID int) {
e.cancelMu.Lock()
delete(e.cancelled, taskID)
e.cancelMu.Unlock()
}
// CancelAll requests cancellation for all running tasks.
func (e *Executor) CancelAll() {
e.mu.Lock()
cancels := make([]context.CancelFunc, 0, len(e.running))
for _, cancel := range e.running {
cancels = append(cancels, cancel)
}
e.mu.Unlock()
for _, cancel := range cancels {
cancel()
}
}
// Shutdown cancels running tasks and waits for completion.
func (e *Executor) Shutdown(ctx context.Context) error {
e.stopping.Store(true)
e.CancelAll()
done := make(chan struct{})
go func() {
e.wg.Wait()
close(done)
}()
select {
case <-ctx.Done():
return ctx.Err()
case <-done:
return nil
}
}
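
For reference, a minimal wiring sketch of the executor (not part of this changeset): nopDocker and nopReporter are hypothetical stand-ins for the real docker client and task client, and the queue/shutdown choreography mirrors what the agent does.

// Hypothetical usage sketch; nopDocker and nopReporter are illustrative stubs.
package main

import (
	"context"
	"os"
	"os/signal"
	"time"

	"github.com/yyhuni/lunafox/agent/internal/domain"
	"github.com/yyhuni/lunafox/agent/internal/task"
)

type nopDocker struct{}

func (nopDocker) StartWorker(ctx context.Context, t *domain.Task, serverURL, serverToken, agentVersion string) (string, error) {
	return "container-id", nil
}
func (nopDocker) Wait(ctx context.Context, containerID string) (int64, error) { return 0, nil }
func (nopDocker) Stop(ctx context.Context, containerID string) error          { return nil }
func (nopDocker) Remove(ctx context.Context, containerID string) error        { return nil }
func (nopDocker) TailLogs(ctx context.Context, containerID string, lines int) (string, error) {
	return "", nil
}

type nopReporter struct{}

func (nopReporter) UpdateStatus(ctx context.Context, taskID int, status, errorMessage string) error {
	return nil
}

func main() {
	counter := &task.Counter{}
	exec := task.NewExecutor(nopDocker{}, nopReporter{}, counter,
		"https://server:8080", "worker-token", "v1.0.0")

	tasks := make(chan *domain.Task, 16)
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()
	go exec.Start(ctx, tasks)

	tasks <- &domain.Task{ID: 1} // queue one task for execution

	<-ctx.Done() // on SIGINT, give running tasks 30 seconds to wind down
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	_ = exec.Shutdown(shutdownCtx)
}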

View File

@@ -0,0 +1,107 @@
package task
import (
"context"
"testing"
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
type fakeReporter struct {
status string
msg string
}
func (f *fakeReporter) UpdateStatus(ctx context.Context, taskID int, status, errorMessage string) error {
f.status = status
f.msg = errorMessage
return nil
}
func TestExecutorMissingWorkerToken(t *testing.T) {
reporter := &fakeReporter{}
exec := &Executor{
client: reporter,
serverURL: "https://server",
workerToken: "",
}
exec.execute(context.Background(), &domain.Task{ID: 1})
if reporter.status != "failed" {
t.Fatalf("expected failed status, got %s", reporter.status)
}
if reporter.msg == "" {
t.Fatalf("expected error message")
}
}
func TestExecutorDockerUnavailable(t *testing.T) {
reporter := &fakeReporter{}
exec := &Executor{
client: reporter,
serverURL: "https://server",
workerToken: "token",
}
exec.execute(context.Background(), &domain.Task{ID: 2})
if reporter.status != "failed" {
t.Fatalf("expected failed status, got %s", reporter.status)
}
if reporter.msg == "" {
t.Fatalf("expected error message")
}
}
func TestExecutorCancelAll(t *testing.T) {
exec := &Executor{
running: map[int]context.CancelFunc{},
}
calls := 0
exec.running[1] = func() { calls++ }
exec.running[2] = func() { calls++ }
exec.CancelAll()
if calls != 2 {
t.Fatalf("expected cancel calls, got %d", calls)
}
}
func TestExecutorShutdownWaits(t *testing.T) {
exec := &Executor{
running: map[int]context.CancelFunc{},
}
calls := 0
exec.running[1] = func() { calls++ }
exec.wg.Add(1)
go func() {
time.Sleep(10 * time.Millisecond)
exec.wg.Done()
}()
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
if err := exec.Shutdown(ctx); err != nil {
t.Fatalf("unexpected error: %v", err)
}
if calls != 1 {
t.Fatalf("expected cancel call")
}
}
func TestExecutorShutdownTimeout(t *testing.T) {
exec := &Executor{
running: map[int]context.CancelFunc{},
}
exec.wg.Add(1)
defer exec.wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancel()
if err := exec.Shutdown(ctx); err == nil {
t.Fatalf("expected timeout error")
}
}

View File

@@ -0,0 +1,252 @@
package task
import (
"context"
"errors"
"math"
"math/rand"
"sync"
"sync/atomic"
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
// Puller coordinates task pulling with load gating and backoff.
type Puller struct {
client TaskPuller
collector MetricsSampler
counter *Counter
maxTasks int
cpuThreshold int
memThreshold int
diskThreshold int
onTask func(*domain.Task)
notifyCh chan struct{}
emptyBackoff []time.Duration
emptyIdx int
errorBackoff time.Duration
errorMax time.Duration
randSrc *rand.Rand
mu sync.RWMutex
paused atomic.Bool
}
// MetricsSampler samples CPU, memory, and disk usage percentages.
type MetricsSampler interface {
Sample() (float64, float64, float64)
}
// TaskPuller pulls the next pending task from the server.
type TaskPuller interface {
PullTask(ctx context.Context) (*domain.Task, error)
}
// NewPuller creates a new Puller.
func NewPuller(client TaskPuller, collector MetricsSampler, counter *Counter, maxTasks, cpuThreshold, memThreshold, diskThreshold int) *Puller {
return &Puller{
client: client,
collector: collector,
counter: counter,
maxTasks: maxTasks,
cpuThreshold: cpuThreshold,
memThreshold: memThreshold,
diskThreshold: diskThreshold,
notifyCh: make(chan struct{}, 1),
emptyBackoff: []time.Duration{5 * time.Second, 10 * time.Second, 30 * time.Second, 60 * time.Second},
errorBackoff: 1 * time.Second,
errorMax: 60 * time.Second,
randSrc: rand.New(rand.NewSource(time.Now().UnixNano())),
}
}
// SetOnTask registers a callback invoked when a task is assigned.
func (p *Puller) SetOnTask(fn func(*domain.Task)) {
p.onTask = fn
}
// NotifyTaskAvailable triggers an immediate pull attempt.
func (p *Puller) NotifyTaskAvailable() {
select {
case p.notifyCh <- struct{}{}:
default:
}
}
// Run starts the pull loop.
func (p *Puller) Run(ctx context.Context) error {
for {
if ctx.Err() != nil {
return ctx.Err()
}
if p.paused.Load() {
if !p.waitUntilCanceled(ctx) {
return ctx.Err()
}
continue
}
loadInterval := p.loadInterval()
if !p.canPull() {
if !p.wait(ctx, loadInterval) {
return ctx.Err()
}
continue
}
task, err := p.client.PullTask(ctx)
if err != nil {
delay := p.nextErrorBackoff()
if !p.wait(ctx, delay) {
return ctx.Err()
}
continue
}
p.resetErrorBackoff()
if task == nil {
delay := p.nextEmptyDelay(loadInterval)
if !p.waitOrNotify(ctx, delay) {
return ctx.Err()
}
continue
}
p.resetEmptyBackoff()
if p.onTask != nil {
p.onTask(task)
}
}
}
func (p *Puller) canPull() bool {
maxTasks, cpuThreshold, memThreshold, diskThreshold := p.currentConfig()
if p.counter != nil && p.counter.Count() >= maxTasks {
return false
}
cpu, mem, disk := p.collector.Sample()
return cpu < float64(cpuThreshold) &&
mem < float64(memThreshold) &&
disk < float64(diskThreshold)
}
func (p *Puller) loadInterval() time.Duration {
cpu, mem, disk := p.collector.Sample()
load := math.Max(cpu, math.Max(mem, disk))
switch {
case load < 50:
return 1 * time.Second
case load < 80:
return 3 * time.Second
default:
return 10 * time.Second
}
}
func (p *Puller) nextEmptyDelay(loadInterval time.Duration) time.Duration {
var empty time.Duration
if p.emptyIdx < len(p.emptyBackoff) {
empty = p.emptyBackoff[p.emptyIdx]
p.emptyIdx++
} else {
empty = p.emptyBackoff[len(p.emptyBackoff)-1]
}
if empty < loadInterval {
return loadInterval
}
return empty
}
func (p *Puller) resetEmptyBackoff() {
p.emptyIdx = 0
}
func (p *Puller) nextErrorBackoff() time.Duration {
delay := p.errorBackoff
next := delay * 2
if next > p.errorMax {
next = p.errorMax
}
p.errorBackoff = next
return withJitter(delay, p.randSrc)
}
func (p *Puller) resetErrorBackoff() {
p.errorBackoff = 1 * time.Second
}
func (p *Puller) wait(ctx context.Context, delay time.Duration) bool {
timer := time.NewTimer(delay)
defer timer.Stop()
select {
case <-ctx.Done():
return false
case <-timer.C:
return true
}
}
func (p *Puller) waitOrNotify(ctx context.Context, delay time.Duration) bool {
timer := time.NewTimer(delay)
defer timer.Stop()
select {
case <-ctx.Done():
return false
case <-p.notifyCh:
return true
case <-timer.C:
return true
}
}
func withJitter(delay time.Duration, src *rand.Rand) time.Duration {
if delay <= 0 || src == nil {
return delay
}
jitter := src.Float64() * 0.2
return delay + time.Duration(float64(delay)*jitter)
}
// EnsureTaskHandler returns an error if no task handler has been registered.
func (p *Puller) EnsureTaskHandler() error {
if p.onTask == nil {
return errors.New("task handler is required")
}
return nil
}
// Pause stops pulling. Once paused, only context cancellation exits the loop.
func (p *Puller) Pause() {
p.paused.Store(true)
}
// UpdateConfig updates puller thresholds and max tasks.
func (p *Puller) UpdateConfig(maxTasks, cpuThreshold, memThreshold, diskThreshold *int) {
p.mu.Lock()
defer p.mu.Unlock()
if maxTasks != nil && *maxTasks > 0 {
p.maxTasks = *maxTasks
}
if cpuThreshold != nil && *cpuThreshold > 0 {
p.cpuThreshold = *cpuThreshold
}
if memThreshold != nil && *memThreshold > 0 {
p.memThreshold = *memThreshold
}
if diskThreshold != nil && *diskThreshold > 0 {
p.diskThreshold = *diskThreshold
}
}
func (p *Puller) currentConfig() (int, int, int, int) {
p.mu.RLock()
defer p.mu.RUnlock()
return p.maxTasks, p.cpuThreshold, p.memThreshold, p.diskThreshold
}
func (p *Puller) waitUntilCanceled(ctx context.Context) bool {
<-ctx.Done()
return false
}
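
A minimal sketch of driving the Puller (illustrative; stubClient and stubSampler are hypothetical placeholders). A nil task from PullTask exercises the empty-queue backoff, and a task_available WebSocket message would call NotifyTaskAvailable to cut that wait short.

// Hypothetical usage sketch; the stubs below are illustrative test doubles.
package main

import (
	"context"
	"log"
	"time"

	"github.com/yyhuni/lunafox/agent/internal/domain"
	"github.com/yyhuni/lunafox/agent/internal/task"
)

type stubClient struct{}

// Returning (nil, nil) means "queue empty": the loop backs off 5s -> 10s -> 30s -> 60s.
func (stubClient) PullTask(ctx context.Context) (*domain.Task, error) { return nil, nil }

type stubSampler struct{}

// Sample reports CPU/mem/disk percentages; values under the thresholds keep pulling enabled.
func (stubSampler) Sample() (float64, float64, float64) { return 10, 20, 30 }

func main() {
	counter := &task.Counter{}
	p := task.NewPuller(stubClient{}, stubSampler{}, counter, 4, 85, 85, 90)
	p.SetOnTask(func(t *domain.Task) { log.Printf("assigned task %d", t.ID) })
	if err := p.EnsureTaskHandler(); err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	_ = p.Run(ctx) // blocks until the context is canceled
}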

View File

@@ -0,0 +1,101 @@
package task
import (
"math/rand"
"testing"
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
func TestPullerUpdateConfig(t *testing.T) {
p := NewPuller(nil, nil, nil, 5, 85, 86, 87)
max, cpu, mem, disk := p.currentConfig()
if max != 5 || cpu != 85 || mem != 86 || disk != 87 {
t.Fatalf("unexpected initial config")
}
maxUpdate := 8
cpuUpdate := 70
p.UpdateConfig(&maxUpdate, &cpuUpdate, nil, nil)
max, cpu, mem, disk = p.currentConfig()
if max != 8 || cpu != 70 || mem != 86 || disk != 87 {
t.Fatalf("unexpected updated config")
}
}
func TestPullerPause(t *testing.T) {
p := NewPuller(nil, nil, nil, 1, 1, 1, 1)
p.Pause()
if !p.paused.Load() {
t.Fatalf("expected paused")
}
}
func TestPullerEnsureTaskHandler(t *testing.T) {
p := NewPuller(nil, nil, nil, 1, 1, 1, 1)
if err := p.EnsureTaskHandler(); err == nil {
t.Fatalf("expected error when handler missing")
}
p.SetOnTask(func(*domain.Task) {})
if err := p.EnsureTaskHandler(); err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func TestPullerNextEmptyDelay(t *testing.T) {
p := NewPuller(nil, nil, nil, 1, 1, 1, 1)
p.emptyBackoff = []time.Duration{5 * time.Second, 10 * time.Second}
if delay := p.nextEmptyDelay(8 * time.Second); delay != 8*time.Second {
t.Fatalf("expected delay to honor load interval, got %v", delay)
}
if delay := p.nextEmptyDelay(1 * time.Second); delay != 10*time.Second {
t.Fatalf("expected backoff delay, got %v", delay)
}
if p.emptyIdx != 2 {
t.Fatalf("expected empty index to advance")
}
p.resetEmptyBackoff()
if p.emptyIdx != 0 {
t.Fatalf("expected empty index reset")
}
}
func TestPullerErrorBackoff(t *testing.T) {
p := NewPuller(nil, nil, nil, 1, 1, 1, 1)
p.randSrc = rand.New(rand.NewSource(1))
first := p.nextErrorBackoff()
if first < time.Second || first > time.Second+(time.Second/5) {
t.Fatalf("unexpected backoff %v", first)
}
if p.errorBackoff != 2*time.Second {
t.Fatalf("expected backoff to double")
}
second := p.nextErrorBackoff()
if second < 2*time.Second || second > 2*time.Second+(2*time.Second/5) {
t.Fatalf("unexpected backoff %v", second)
}
if p.errorBackoff != 4*time.Second {
t.Fatalf("expected backoff to double")
}
p.resetErrorBackoff()
if p.errorBackoff != time.Second {
t.Fatalf("expected error backoff reset")
}
}
func TestWithJitterRange(t *testing.T) {
rng := rand.New(rand.NewSource(1))
delay := 10 * time.Second
got := withJitter(delay, rng)
if got < delay {
t.Fatalf("expected jitter >= delay")
}
if got > delay+(delay/5) {
t.Fatalf("expected jitter <= 20%%")
}
}

View File

@@ -0,0 +1,279 @@
package update
import (
"context"
"fmt"
"io"
"math/rand"
"os"
"strings"
"sync"
"time"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/strslice"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/yyhuni/lunafox/agent/internal/config"
"github.com/yyhuni/lunafox/agent/internal/domain"
"github.com/yyhuni/lunafox/agent/internal/logger"
"go.uber.org/zap"
)
// Updater handles agent self-update.
type Updater struct {
docker dockerClient
health healthSetter
puller pullerController
executor executorController
cfg configSnapshot
apiKey string
token string
mu sync.Mutex
updating bool
randSrc *rand.Rand
backoff time.Duration
maxBackoff time.Duration
}
type dockerClient interface {
ImagePull(ctx context.Context, imageRef string) (io.ReadCloser, error)
ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *ocispec.Platform, name string) (container.CreateResponse, error)
ContainerStart(ctx context.Context, containerID string, opts container.StartOptions) error
}
type healthSetter interface {
Set(state, reason, message string)
}
type pullerController interface {
Pause()
}
type executorController interface {
Shutdown(ctx context.Context) error
}
type configSnapshot interface {
Snapshot() config.Config
}
// NewUpdater creates a new updater.
func NewUpdater(dockerClient dockerClient, healthManager healthSetter, puller pullerController, executor executorController, cfg configSnapshot, apiKey, token string) *Updater {
return &Updater{
docker: dockerClient,
health: healthManager,
puller: puller,
executor: executor,
cfg: cfg,
apiKey: apiKey,
token: token,
randSrc: rand.New(rand.NewSource(time.Now().UnixNano())),
backoff: 30 * time.Second,
maxBackoff: 10 * time.Minute,
}
}
// HandleUpdateRequired triggers the update flow.
func (u *Updater) HandleUpdateRequired(payload domain.UpdateRequiredPayload) {
u.mu.Lock()
if u.updating {
u.mu.Unlock()
return
}
u.updating = true
u.mu.Unlock()
go u.run(payload)
}
func (u *Updater) run(payload domain.UpdateRequiredPayload) {
defer func() {
if r := recover(); r != nil {
logger.Log.Error("agent update panic", zap.Any("panic", r))
u.health.Set("paused", "update_panic", fmt.Sprintf("%v", r))
}
u.mu.Lock()
u.updating = false
u.mu.Unlock()
}()
u.puller.Pause()
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
_ = u.executor.Shutdown(ctx)
cancel()
for {
if err := u.updateOnce(payload); err == nil {
u.health.Set("ok", "", "")
os.Exit(0)
} else {
u.health.Set("paused", "update_failed", err.Error())
}
delay := withJitter(u.backoff, u.randSrc)
if u.backoff < u.maxBackoff {
u.backoff *= 2
if u.backoff > u.maxBackoff {
u.backoff = u.maxBackoff
}
}
time.Sleep(delay)
}
}
func (u *Updater) updateOnce(payload domain.UpdateRequiredPayload) error {
if u.docker == nil {
return fmt.Errorf("docker client unavailable")
}
image := strings.TrimSpace(payload.Image)
version := strings.TrimSpace(payload.Version)
if image == "" || version == "" {
return fmt.Errorf("invalid update payload")
}
// Strict validation: reject invalid data from server
if err := validateImageName(image); err != nil {
logger.Log.Warn("invalid image name from server", zap.String("image", image), zap.Error(err))
return fmt.Errorf("invalid image name from server: %w", err)
}
if err := validateVersion(version); err != nil {
logger.Log.Warn("invalid version from server", zap.String("version", version), zap.Error(err))
return fmt.Errorf("invalid version from server: %w", err)
}
fullImage := fmt.Sprintf("%s:%s", image, version)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
reader, err := u.docker.ImagePull(ctx, fullImage)
if err != nil {
return err
}
_, _ = io.Copy(io.Discard, reader)
_ = reader.Close()
if err := u.startNewContainer(ctx, image, version); err != nil {
return err
}
return nil
}
func (u *Updater) startNewContainer(ctx context.Context, image, version string) error {
env := []string{
fmt.Sprintf("SERVER_URL=%s", u.cfg.Snapshot().ServerURL),
fmt.Sprintf("API_KEY=%s", u.apiKey),
fmt.Sprintf("MAX_TASKS=%d", u.cfg.Snapshot().MaxTasks),
fmt.Sprintf("CPU_THRESHOLD=%d", u.cfg.Snapshot().CPUThreshold),
fmt.Sprintf("MEM_THRESHOLD=%d", u.cfg.Snapshot().MemThreshold),
fmt.Sprintf("DISK_THRESHOLD=%d", u.cfg.Snapshot().DiskThreshold),
fmt.Sprintf("AGENT_VERSION=%s", version),
}
if u.token != "" {
env = append(env, fmt.Sprintf("WORKER_TOKEN=%s", u.token))
}
cfg := &container.Config{
Image: fmt.Sprintf("%s:%s", image, version),
Env: env,
Cmd: strslice.StrSlice{},
}
hostConfig := &container.HostConfig{
Binds: []string{
"/var/run/docker.sock:/var/run/docker.sock",
"/opt/lunafox:/opt/lunafox",
},
RestartPolicy: container.RestartPolicy{Name: "unless-stopped"},
OomScoreAdj: -500,
}
// Version is already validated, just normalize to lowercase for container name
name := fmt.Sprintf("lunafox-agent-%s", strings.ToLower(version))
resp, err := u.docker.ContainerCreate(ctx, cfg, hostConfig, &network.NetworkingConfig{}, nil, name)
if err != nil {
return err
}
if err := u.docker.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
return err
}
logger.Log.Info("agent update started new container", zap.String("containerId", resp.ID))
return nil
}
func withJitter(delay time.Duration, src *rand.Rand) time.Duration {
if delay <= 0 || src == nil {
return delay
}
jitter := src.Float64() * 0.2
return delay + time.Duration(float64(delay)*jitter)
}
// validateImageName validates that the image name contains only safe characters.
// Returns error if validation fails.
func validateImageName(image string) error {
if len(image) == 0 {
return fmt.Errorf("image name cannot be empty")
}
if len(image) > 255 {
return fmt.Errorf("image name too long: %d characters", len(image))
}
// Allow: alphanumeric, dots, hyphens, underscores, slashes (for registry paths)
for i, r := range image {
if !((r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '.' || r == '-' || r == '_' || r == '/') {
return fmt.Errorf("invalid character at position %d: %c", i, r)
}
}
// Must not start or end with special characters
first := rune(image[0])
last := rune(image[len(image)-1])
if first == '.' || first == '-' || first == '/' {
return fmt.Errorf("image name cannot start with special character: %c", first)
}
if last == '.' || last == '-' || last == '/' {
return fmt.Errorf("image name cannot end with special character: %c", last)
}
return nil
}
// validateVersion validates that the version string contains only safe characters.
// Returns error if validation fails.
func validateVersion(version string) error {
if len(version) == 0 {
return fmt.Errorf("version cannot be empty")
}
if len(version) > 128 {
return fmt.Errorf("version too long: %d characters", len(version))
}
// Allow: alphanumeric, dots, hyphens, underscores
for i, r := range version {
if !((r >= 'a' && r <= 'z') ||
(r >= 'A' && r <= 'Z') ||
(r >= '0' && r <= '9') ||
r == '.' || r == '-' || r == '_') {
return fmt.Errorf("invalid character at position %d: %c", i, r)
}
}
// Must not start or end with special characters
first := rune(version[0])
last := rune(version[len(version)-1])
if first == '.' || first == '-' || first == '_' {
return fmt.Errorf("version cannot start with special character: %c", first)
}
if last == '.' || last == '-' || last == '_' {
return fmt.Errorf("version cannot end with special character: %c", last)
}
return nil
}
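
Since both validators are unexported, a test-style, in-package illustration of what they accept and reject (a sketch; the cases are illustrative, not exhaustive):

package update

import "testing"

// Illustrative cases: plain registry paths pass, while shell
// metacharacters and leading separators are rejected.
func TestValidatorExamples(t *testing.T) {
	if err := validateImageName("yyhuni/lunafox-agent"); err != nil {
		t.Fatalf("expected registry path to pass: %v", err)
	}
	if err := validateImageName("evil;rm -rf /"); err == nil {
		t.Fatal("expected shell metacharacters to be rejected")
	}
	if err := validateVersion(".hidden"); err == nil {
		t.Fatal("expected leading dot to be rejected")
	}
}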

View File

@@ -0,0 +1,45 @@
package update
import (
"math/rand"
"strings"
"testing"
"time"
"github.com/yyhuni/lunafox/agent/internal/domain"
)
func TestValidateVersionRejectsInvalidChars(t *testing.T) {
if err := validateVersion("v1.0.0+TEST"); err == nil {
t.Fatalf("expected '+' to be rejected")
}
if err := validateVersion("v1.0.0"); err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func TestWithJitterRange(t *testing.T) {
rng := rand.New(rand.NewSource(1))
delay := 10 * time.Second
got := withJitter(delay, rng)
if got < delay {
t.Fatalf("expected jitter >= delay")
}
if got > delay+(delay/5) {
t.Fatalf("expected jitter <= 20%%")
}
}
func TestUpdateOnceDockerUnavailable(t *testing.T) {
updater := &Updater{}
payload := domain.UpdateRequiredPayload{Version: "v1.0.0", Image: "yyhuni/lunafox-agent"}
err := updater.updateOnce(payload)
if err == nil {
t.Fatalf("expected error when docker client is nil")
}
if !strings.Contains(err.Error(), "docker client unavailable") {
t.Fatalf("unexpected error: %v", err)
}
}

View File

@@ -0,0 +1,37 @@
package websocket
import "time"
// Backoff implements exponential backoff with a maximum cap.
type Backoff struct {
base time.Duration
max time.Duration
current time.Duration
}
// NewBackoff creates a backoff with the given base and max delay.
func NewBackoff(base, max time.Duration) Backoff {
return Backoff{
base: base,
max: max,
}
}
// Next returns the next backoff duration.
func (b *Backoff) Next() time.Duration {
if b.current <= 0 {
b.current = b.base
return b.current
}
next := b.current * 2
if next > b.max {
next = b.max
}
b.current = next
return b.current
}
// Reset clears the backoff to start over.
func (b *Backoff) Reset() {
b.current = 0
}
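
A small illustration of how a reconnect loop consumes Backoff (the failing dial is implied rather than shown):

package main

import (
	"fmt"
	"time"

	"github.com/yyhuni/lunafox/agent/internal/websocket"
)

func main() {
	b := websocket.NewBackoff(time.Second, 60*time.Second)
	for attempt := 1; attempt <= 8; attempt++ {
		// After each failed dial, wait for the next delay:
		// 1s, 2s, 4s, 8s, 16s, 32s, then capped at 60s.
		fmt.Printf("attempt %d: wait %v\n", attempt, b.Next())
	}
	b.Reset()             // a successful connection restarts the sequence
	fmt.Println(b.Next()) // 1s again
}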

View File

@@ -0,0 +1,32 @@
package websocket
import (
"testing"
"time"
)
func TestBackoffSequence(t *testing.T) {
b := NewBackoff(time.Second, 60*time.Second)
expected := []time.Duration{
time.Second,
2 * time.Second,
4 * time.Second,
8 * time.Second,
16 * time.Second,
32 * time.Second,
60 * time.Second,
60 * time.Second,
}
for i, exp := range expected {
if got := b.Next(); got != exp {
t.Fatalf("step %d: expected %v, got %v", i, exp, got)
}
}
b.Reset()
if got := b.Next(); got != time.Second {
t.Fatalf("after reset expected %v, got %v", time.Second, got)
}
}

View File

@@ -0,0 +1,177 @@
package websocket
import (
"context"
"crypto/tls"
"net/http"
"time"
"github.com/gorilla/websocket"
"github.com/yyhuni/lunafox/agent/internal/logger"
"go.uber.org/zap"
)
const (
defaultPingInterval = 30 * time.Second
defaultPongWait = 60 * time.Second
defaultWriteWait = 10 * time.Second
)
// Client maintains a WebSocket connection to the server.
type Client struct {
wsURL string
apiKey string
dialer *websocket.Dialer
send chan []byte
onMessage func([]byte)
backoff Backoff
pingInterval time.Duration
pongWait time.Duration
writeWait time.Duration
}
// NewClient creates a WebSocket client for the agent.
func NewClient(wsURL, apiKey string) *Client {
dialer := *websocket.DefaultDialer
dialer.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
return &Client{
wsURL: wsURL,
apiKey: apiKey,
dialer: &dialer,
send: make(chan []byte, 256),
backoff: NewBackoff(1*time.Second, 60*time.Second),
pingInterval: defaultPingInterval,
pongWait: defaultPongWait,
writeWait: defaultWriteWait,
}
}
// SetOnMessage registers a callback for incoming messages.
func (c *Client) SetOnMessage(fn func([]byte)) {
c.onMessage = fn
}
// Send queues a message for sending. It returns false if the buffer is full.
func (c *Client) Send(payload []byte) bool {
select {
case c.send <- payload:
return true
default:
return false
}
}
// Run keeps the connection alive with reconnect backoff and keepalive pings.
func (c *Client) Run(ctx context.Context) error {
for {
if ctx.Err() != nil {
return ctx.Err()
}
logger.Log.Info("websocket connect attempt", zap.String("url", c.wsURL))
conn, err := c.connect(ctx)
if err != nil {
logger.Log.Warn("websocket connect failed", zap.Error(err))
if !sleepWithContext(ctx, c.backoff.Next()) {
return ctx.Err()
}
continue
}
c.backoff.Reset()
logger.Log.Info("websocket connected")
err = c.runConn(ctx, conn)
if err != nil && ctx.Err() == nil {
logger.Log.Warn("websocket connection closed", zap.Error(err))
}
if ctx.Err() != nil {
return ctx.Err()
}
if !sleepWithContext(ctx, c.backoff.Next()) {
return ctx.Err()
}
}
}
func (c *Client) connect(ctx context.Context) (*websocket.Conn, error) {
header := http.Header{}
if c.apiKey != "" {
header.Set("X-Agent-Key", c.apiKey)
}
conn, _, err := c.dialer.DialContext(ctx, c.wsURL, header)
return conn, err
}
func (c *Client) runConn(ctx context.Context, conn *websocket.Conn) error {
defer conn.Close()
conn.SetReadDeadline(time.Now().Add(c.pongWait))
conn.SetPongHandler(func(string) error {
conn.SetReadDeadline(time.Now().Add(c.pongWait))
return nil
})
errCh := make(chan error, 2)
go c.readLoop(conn, errCh)
go c.writeLoop(ctx, conn, errCh)
select {
case <-ctx.Done():
return ctx.Err()
case err := <-errCh:
return err
}
}
func (c *Client) readLoop(conn *websocket.Conn, errCh chan<- error) {
for {
_, message, err := conn.ReadMessage()
if err != nil {
errCh <- err
return
}
if c.onMessage != nil {
c.onMessage(message)
}
}
}
func (c *Client) writeLoop(ctx context.Context, conn *websocket.Conn, errCh chan<- error) {
ticker := time.NewTicker(c.pingInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
errCh <- ctx.Err()
return
case payload := <-c.send:
if err := c.writeMessage(conn, websocket.TextMessage, payload); err != nil {
errCh <- err
return
}
case <-ticker.C:
if err := c.writeMessage(conn, websocket.PingMessage, nil); err != nil {
errCh <- err
return
}
}
}
}
func (c *Client) writeMessage(conn *websocket.Conn, msgType int, payload []byte) error {
_ = conn.SetWriteDeadline(time.Now().Add(c.writeWait))
return conn.WriteMessage(msgType, payload)
}
func sleepWithContext(ctx context.Context, delay time.Duration) bool {
timer := time.NewTimer(delay)
defer timer.Stop()
select {
case <-ctx.Done():
return false
case <-timer.C:
return true
}
}

View File

@@ -0,0 +1,32 @@
package websocket
import (
"context"
"testing"
"time"
)
func TestClientSendBufferFull(t *testing.T) {
client := &Client{send: make(chan []byte, 1)}
if !client.Send([]byte("first")) {
t.Fatalf("expected first send to succeed")
}
if client.Send([]byte("second")) {
t.Fatalf("expected second send to fail when buffer is full")
}
}
func TestSleepWithContextCancelled(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
cancel()
if sleepWithContext(ctx, 50*time.Millisecond) {
t.Fatalf("expected sleepWithContext to return false when canceled")
}
}
func TestSleepWithContextElapsed(t *testing.T) {
if !sleepWithContext(context.Background(), 5*time.Millisecond) {
t.Fatalf("expected sleepWithContext to return true after delay")
}
}

View File

@@ -0,0 +1,90 @@
package websocket
import (
"encoding/json"
"github.com/yyhuni/lunafox/agent/internal/protocol"
)
// Handler routes incoming WebSocket messages.
type Handler struct {
onTaskAvailable func()
onTaskCancel func(int)
onConfigUpdate func(protocol.ConfigUpdatePayload)
onUpdateReq func(protocol.UpdateRequiredPayload)
}
// NewHandler creates a message handler.
func NewHandler() *Handler {
return &Handler{}
}
// OnTaskAvailable registers a callback for task_available messages.
func (h *Handler) OnTaskAvailable(fn func()) {
h.onTaskAvailable = fn
}
// OnTaskCancel registers a callback for task_cancel messages.
func (h *Handler) OnTaskCancel(fn func(int)) {
h.onTaskCancel = fn
}
// OnConfigUpdate registers a callback for config_update messages.
func (h *Handler) OnConfigUpdate(fn func(protocol.ConfigUpdatePayload)) {
h.onConfigUpdate = fn
}
// OnUpdateRequired registers a callback for update_required messages.
func (h *Handler) OnUpdateRequired(fn func(protocol.UpdateRequiredPayload)) {
h.onUpdateReq = fn
}
// Handle processes a raw message.
func (h *Handler) Handle(raw []byte) {
var msg struct {
Type string `json:"type"`
Data json.RawMessage `json:"payload"`
}
if err := json.Unmarshal(raw, &msg); err != nil {
return
}
switch msg.Type {
case protocol.MessageTypeTaskAvailable:
if h.onTaskAvailable != nil {
h.onTaskAvailable()
}
case protocol.MessageTypeTaskCancel:
if h.onTaskCancel == nil {
return
}
var payload protocol.TaskCancelPayload
if err := json.Unmarshal(msg.Data, &payload); err != nil {
return
}
if payload.TaskID > 0 {
h.onTaskCancel(payload.TaskID)
}
case protocol.MessageTypeConfigUpdate:
if h.onConfigUpdate == nil {
return
}
var payload protocol.ConfigUpdatePayload
if err := json.Unmarshal(msg.Data, &payload); err != nil {
return
}
h.onConfigUpdate(payload)
case protocol.MessageTypeUpdateRequired:
if h.onUpdateReq == nil {
return
}
var payload protocol.UpdateRequiredPayload
if err := json.Unmarshal(msg.Data, &payload); err != nil {
return
}
if payload.Version == "" || payload.Image == "" {
return
}
h.onUpdateReq(payload)
}
}
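
A wiring sketch connecting the client and handler (illustrative; the URL path and callback bodies are assumptions, not the agent's actual wiring):

package main

import (
	"context"
	"log"

	"github.com/yyhuni/lunafox/agent/internal/protocol"
	"github.com/yyhuni/lunafox/agent/internal/websocket"
)

func main() {
	h := websocket.NewHandler()
	h.OnTaskAvailable(func() { log.Println("task available") })
	h.OnTaskCancel(func(id int) { log.Printf("cancel task %d", id) })
	h.OnConfigUpdate(func(p protocol.ConfigUpdatePayload) { log.Println("config update received") })
	h.OnUpdateRequired(func(p protocol.UpdateRequiredPayload) {
		log.Printf("update required: %s:%s", p.Image, p.Version)
	})

	c := websocket.NewClient("wss://server:8080/ws/agent", "agent-api-key") // URL is illustrative
	c.SetOnMessage(h.Handle) // every raw frame is routed through the handler

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	_ = c.Run(ctx) // reconnects with backoff until ctx is canceled
}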

View File

@@ -0,0 +1,85 @@
package websocket
import (
"fmt"
"testing"
"github.com/yyhuni/lunafox/agent/internal/protocol"
)
func TestHandlersTaskAvailable(t *testing.T) {
h := NewHandler()
called := 0
h.OnTaskAvailable(func() { called++ })
message := fmt.Sprintf(`{"type":"%s","payload":{},"timestamp":"2026-01-01T00:00:00Z"}`, protocol.MessageTypeTaskAvailable)
h.Handle([]byte(message))
if called != 1 {
t.Fatalf("expected callback to be called")
}
}
func TestHandlersTaskCancel(t *testing.T) {
h := NewHandler()
var got int
h.OnTaskCancel(func(id int) { got = id })
message := fmt.Sprintf(`{"type":"%s","payload":{"taskId":123},"timestamp":"2026-01-01T00:00:00Z"}`, protocol.MessageTypeTaskCancel)
h.Handle([]byte(message))
if got != 123 {
t.Fatalf("expected taskId 123")
}
}
func TestHandlersConfigUpdate(t *testing.T) {
h := NewHandler()
var maxTasks int
h.OnConfigUpdate(func(payload protocol.ConfigUpdatePayload) {
if payload.MaxTasks != nil {
maxTasks = *payload.MaxTasks
}
})
message := fmt.Sprintf(`{"type":"%s","payload":{"maxTasks":8},"timestamp":"2026-01-01T00:00:00Z"}`, protocol.MessageTypeConfigUpdate)
h.Handle([]byte(message))
if maxTasks != 8 {
t.Fatalf("expected maxTasks 8")
}
}
func TestHandlersUpdateRequired(t *testing.T) {
h := NewHandler()
var version string
h.OnUpdateRequired(func(payload protocol.UpdateRequiredPayload) { version = payload.Version })
message := fmt.Sprintf(`{"type":"%s","payload":{"version":"v1.0.1","image":"yyhuni/lunafox-agent"},"timestamp":"2026-01-01T00:00:00Z"}`, protocol.MessageTypeUpdateRequired)
h.Handle([]byte(message))
if version != "v1.0.1" {
t.Fatalf("expected version")
}
}
func TestHandlersIgnoreInvalidJSON(t *testing.T) {
h := NewHandler()
called := 0
h.OnTaskAvailable(func() { called++ })
h.Handle([]byte("{bad json"))
if called != 0 {
t.Fatalf("expected no callbacks on invalid json")
}
}
func TestHandlersUpdateRequiredMissingFields(t *testing.T) {
h := NewHandler()
called := 0
h.OnUpdateRequired(func(payload protocol.UpdateRequiredPayload) { called++ })
message := fmt.Sprintf(`{"type":"%s","payload":{"version":"","image":"yyhuni/lunafox-agent"}}`, protocol.MessageTypeUpdateRequired)
h.Handle([]byte(message))
message = fmt.Sprintf(`{"type":"%s","payload":{"version":"v1.2.3","image":""}}`, protocol.MessageTypeUpdateRequired)
h.Handle([]byte(message))
if called != 0 {
t.Fatalf("expected no callbacks for invalid payload")
}
}

View File

@@ -0,0 +1,97 @@
package websocket
import (
"context"
"encoding/json"
"time"
"github.com/yyhuni/lunafox/agent/internal/health"
"github.com/yyhuni/lunafox/agent/internal/logger"
"github.com/yyhuni/lunafox/agent/internal/metrics"
"github.com/yyhuni/lunafox/agent/internal/protocol"
"go.uber.org/zap"
)
// HeartbeatSender sends periodic heartbeat messages over WebSocket.
type HeartbeatSender struct {
client *Client
collector *metrics.Collector
health *health.Manager
version string
hostname string
startedAt time.Time
taskCount func() int
interval time.Duration
lastSentAt time.Time
}
// NewHeartbeatSender creates a heartbeat sender.
func NewHeartbeatSender(client *Client, collector *metrics.Collector, healthManager *health.Manager, version, hostname string, taskCount func() int) *HeartbeatSender {
return &HeartbeatSender{
client: client,
collector: collector,
health: healthManager,
version: version,
hostname: hostname,
startedAt: time.Now(),
taskCount: taskCount,
interval: 5 * time.Second,
}
}
// Start begins sending heartbeats until context is canceled.
func (h *HeartbeatSender) Start(ctx context.Context) {
ticker := time.NewTicker(h.interval)
defer ticker.Stop()
h.sendOnce()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
h.sendOnce()
}
}
}
func (h *HeartbeatSender) sendOnce() {
cpu, mem, disk := h.collector.Sample()
uptime := int64(time.Since(h.startedAt).Seconds())
tasks := 0
if h.taskCount != nil {
tasks = h.taskCount()
}
status := h.health.Get()
payload := protocol.HeartbeatPayload{
CPU: cpu,
Mem: mem,
Disk: disk,
Tasks: tasks,
Version: h.version,
Hostname: h.hostname,
Uptime: uptime,
Health: protocol.HealthStatus{
State: status.State,
Reason: status.Reason,
Message: status.Message,
Since: status.Since,
},
}
msg := protocol.Message{
Type: protocol.MessageTypeHeartbeat,
Payload: payload,
Timestamp: time.Now().UTC(),
}
data, err := json.Marshal(msg)
if err != nil {
logger.Log.Warn("failed to marshal heartbeat message", zap.Error(err))
return
}
if !h.client.Send(data) {
logger.Log.Warn("failed to send heartbeat: client not connected")
}
}
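
A sketch of running heartbeats beside the connection loop (illustrative; the URL is an assumption). Heartbeats queue into the client's send buffer and are flushed once the socket connects; when the buffer is full, sendOnce logs a warning.

package main

import (
	"context"
	"os"

	"github.com/yyhuni/lunafox/agent/internal/health"
	"github.com/yyhuni/lunafox/agent/internal/metrics"
	"github.com/yyhuni/lunafox/agent/internal/task"
	"github.com/yyhuni/lunafox/agent/internal/websocket"
)

func main() {
	client := websocket.NewClient("wss://server:8080/ws/agent", "agent-api-key") // URL is illustrative
	collector := metrics.NewCollector()
	healthManager := health.NewManager()
	counter := &task.Counter{}
	hostname, _ := os.Hostname()

	hb := websocket.NewHeartbeatSender(client, collector, healthManager,
		"v1.0.0", hostname, counter.Count)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	go hb.Start(ctx)    // one heartbeat immediately, then every 5 seconds
	_ = client.Run(ctx) // blocks, reconnecting with backoff
}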

View File

@@ -0,0 +1,57 @@
package websocket
import (
"encoding/json"
"testing"
"time"
"github.com/yyhuni/lunafox/agent/internal/health"
"github.com/yyhuni/lunafox/agent/internal/metrics"
"github.com/yyhuni/lunafox/agent/internal/protocol"
)
func TestHeartbeatSenderSendOnce(t *testing.T) {
client := &Client{send: make(chan []byte, 1)}
collector := metrics.NewCollector()
healthManager := health.NewManager()
healthManager.Set("paused", "maintenance", "waiting")
sender := NewHeartbeatSender(client, collector, healthManager, "v1.0.0", "agent-host", func() int { return 3 })
sender.sendOnce()
select {
case payload := <-client.send:
var msg struct {
Type string `json:"type"`
Payload map[string]interface{} `json:"payload"`
Timestamp time.Time `json:"timestamp"`
}
if err := json.Unmarshal(payload, &msg); err != nil {
t.Fatalf("unmarshal heartbeat: %v", err)
}
if msg.Type != protocol.MessageTypeHeartbeat {
t.Fatalf("expected heartbeat type, got %s", msg.Type)
}
if msg.Timestamp.IsZero() {
t.Fatalf("expected timestamp")
}
if msg.Payload["version"] != "v1.0.0" {
t.Fatalf("expected version in payload")
}
if msg.Payload["hostname"] != "agent-host" {
t.Fatalf("expected hostname in payload")
}
if tasks, ok := msg.Payload["tasks"].(float64); !ok || int(tasks) != 3 {
t.Fatalf("expected tasks=3")
}
healthPayload, ok := msg.Payload["health"].(map[string]interface{})
if !ok {
t.Fatalf("expected health payload")
}
if healthPayload["state"] != "paused" {
t.Fatalf("expected health state paused")
}
default:
t.Fatalf("expected heartbeat message")
}
}

View File

@@ -0,0 +1,13 @@
package integration
import (
"os"
"testing"
)
func TestTaskExecutionFlow(t *testing.T) {
if os.Getenv("AGENT_INTEGRATION") == "" {
t.Skip("set AGENT_INTEGRATION=1 to run integration tests")
}
// TODO: wire up real server + docker environment for end-to-end validation.
}

View File

@@ -1,391 +0,0 @@
"""
指纹识别 Flow
负责编排指纹识别的完整流程
架构:
- Flow 负责编排多个原子 Task
- 在 site_scan 后串行执行
- 使用 xingfinger 工具识别技术栈
- 流式处理输出,批量更新数据库
"""
import logging
from datetime import datetime
from pathlib import Path
from prefect import flow
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
on_scan_flow_running,
)
from apps.scan.tasks.fingerprint_detect import (
export_urls_for_fingerprint_task,
run_xingfinger_and_stream_update_tech_task,
)
from apps.scan.utils import build_scan_command, user_log, wait_for_system_load
from apps.scan.utils.fingerprint_helpers import get_fingerprint_paths
logger = logging.getLogger(__name__)
def calculate_fingerprint_detect_timeout(
url_count: int,
base_per_url: float = 10.0,
min_timeout: int = 300
) -> int:
"""
根据 URL 数量计算超时时间
公式:超时时间 = URL 数量 × 每 URL 基础时间
最小值300秒无上限
Args:
url_count: URL 数量
base_per_url: 每 URL 基础时间(秒),默认 10秒
min_timeout: 最小超时时间(秒),默认 300秒
Returns:
int: 计算出的超时时间(秒)
"""
return max(min_timeout, int(url_count * base_per_url))
def _export_urls(
target_id: int,
fingerprint_dir: Path,
source: str = 'website'
) -> tuple[str, int]:
"""
导出 URL 到文件
Args:
target_id: 目标 ID
fingerprint_dir: 指纹识别目录
source: 数据源类型
Returns:
tuple: (urls_file, total_count)
"""
logger.info("Step 1: 导出 URL 列表 (source=%s)", source)
urls_file = str(fingerprint_dir / 'urls.txt')
export_result = export_urls_for_fingerprint_task(
target_id=target_id,
output_file=urls_file,
source=source,
batch_size=1000
)
total_count = export_result['total_count']
logger.info(
"✓ URL 导出完成 - 文件: %s, 数量: %d",
export_result['output_file'],
total_count
)
return export_result['output_file'], total_count
def _run_fingerprint_detect(
enabled_tools: dict,
urls_file: str,
url_count: int,
fingerprint_dir: Path,
scan_id: int,
target_id: int,
source: str
) -> tuple[dict, list]:
"""
执行指纹识别任务
Args:
enabled_tools: 已启用的工具配置字典
urls_file: URL 文件路径
url_count: URL 总数
fingerprint_dir: 指纹识别目录
scan_id: 扫描任务 ID
target_id: 目标 ID
source: 数据源类型
Returns:
tuple: (tool_stats, failed_tools)
"""
tool_stats = {}
failed_tools = []
for tool_name, tool_config in enabled_tools.items():
# 1. Resolve the fingerprint library paths
lib_names = tool_config.get('fingerprint_libs', ['ehole'])
fingerprint_paths = get_fingerprint_paths(lib_names)
if not fingerprint_paths:
reason = f"没有可用的指纹库: {lib_names}"
logger.warning(reason)
failed_tools.append({'tool': tool_name, 'reason': reason})
continue
# 2. Merge the fingerprint library paths into tool_config (for command building)
tool_config_with_paths = {**tool_config, **fingerprint_paths}
# 3. Build the command
try:
command = build_scan_command(
tool_name=tool_name,
scan_type='fingerprint_detect',
command_params={'urls_file': urls_file},
tool_config=tool_config_with_paths
)
except Exception as e:
reason = f"命令构建失败: {e}"
logger.error("构建 %s 命令失败: %s", tool_name, e)
failed_tools.append({'tool': tool_name, 'reason': reason})
continue
# 4. Compute the timeout
timeout = calculate_fingerprint_detect_timeout(url_count)
# 5. Build the log file path
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
log_file = fingerprint_dir / f"{tool_name}_{timestamp}.log"
logger.info(
"开始执行 %s 指纹识别 - URL数: %d, 超时: %ds, 指纹库: %s",
tool_name, url_count, timeout, list(fingerprint_paths.keys())
)
user_log(scan_id, "fingerprint_detect", f"Running {tool_name}: {command}")
# 6. Run the scan task
try:
result = run_xingfinger_and_stream_update_tech_task(
cmd=command,
tool_name=tool_name,
scan_id=scan_id,
target_id=target_id,
source=source,
cwd=str(fingerprint_dir),
timeout=timeout,
log_file=str(log_file),
batch_size=100
)
tool_stats[tool_name] = {
'command': command,
'result': result,
'timeout': timeout,
'fingerprint_libs': list(fingerprint_paths.keys())
}
tool_updated = result.get('updated_count', 0)
logger.info(
"✓ 工具 %s 执行完成 - 处理记录: %d, 更新: %d, 未找到: %d",
tool_name,
result.get('processed_records', 0),
tool_updated,
result.get('not_found_count', 0)
)
user_log(
scan_id, "fingerprint_detect",
f"{tool_name} completed: identified {tool_updated} fingerprints"
)
except Exception as exc:
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("工具 %s 执行失败: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "fingerprint_detect", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
"以下指纹识别工具执行失败: %s",
', '.join([f['tool'] for f in failed_tools])
)
return tool_stats, failed_tools
@flow(
name="fingerprint_detect",
log_prints=True,
on_running=[on_scan_flow_running],
on_completion=[on_scan_flow_completed],
on_failure=[on_scan_flow_failed],
)
def fingerprint_detect_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: dict
) -> dict:
"""
指纹识别 Flow
主要功能:
1. 从数据库导出目标下所有 WebSite URL 到文件
2. 使用 xingfinger 进行技术栈识别
3. 解析结果并更新 WebSite.tech 字段(合并去重)
工作流程:
Step 0: 创建工作目录
Step 1: 导出 URL 列表
Step 2: 解析配置,获取启用的工具
Step 3: 执行 xingfinger 并解析结果
Args:
scan_id: 扫描任务 ID
target_name: 目标名称
target_id: 目标 ID
scan_workspace_dir: 扫描工作空间目录
enabled_tools: 启用的工具配置xingfinger
Returns:
dict: 扫描结果
"""
try:
# Load check: wait until system resources are sufficient
wait_for_system_load(context="fingerprint_detect_flow")
logger.info(
"开始指纹识别 - Scan ID: %s, Target: %s, Workspace: %s",
scan_id, target_name, scan_workspace_dir
)
user_log(scan_id, "fingerprint_detect", "Starting fingerprint detection")
# Parameter validation
if scan_id is None:
raise ValueError("scan_id must not be empty")
if not target_name:
raise ValueError("target_name must not be empty")
if target_id is None:
raise ValueError("target_id must not be empty")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir must not be empty")
# Data source type (currently only 'website' is supported)
source = 'website'
# Step 0: create the working directory
from apps.scan.utils import setup_scan_directory
fingerprint_dir = setup_scan_directory(scan_workspace_dir, 'fingerprint_detect')
# Step 1: export URLs (supports lazy loading)
urls_file, url_count = _export_urls(target_id, fingerprint_dir, source)
if url_count == 0:
logger.warning("跳过指纹识别:没有 URL 可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "fingerprint_detect", "Skipped: no URLs to scan", "warning")
return _build_empty_result(scan_id, target_name, scan_workspace_dir, urls_file)
# Step 2: tool configuration
logger.info("Step 2: tool configuration")
logger.info("✓ enabled tools: %s", ', '.join(enabled_tools.keys()))
# Step 3: run fingerprint detection
logger.info("Step 3: running fingerprint detection")
tool_stats, failed_tools = _run_fingerprint_detect(
enabled_tools=enabled_tools,
urls_file=urls_file,
url_count=url_count,
fingerprint_dir=fingerprint_dir,
scan_id=scan_id,
target_id=target_id,
source=source
)
# Dynamically build the list of executed tasks
executed_tasks = ['export_urls_for_fingerprint']
executed_tasks.extend([f'run_xingfinger ({tool})' for tool in tool_stats])
# Aggregate the results from all tools
total_processed = sum(
stats['result'].get('processed_records', 0) for stats in tool_stats.values()
)
total_updated = sum(
stats['result'].get('updated_count', 0) for stats in tool_stats.values()
)
total_created = sum(
stats['result'].get('created_count', 0) for stats in tool_stats.values()
)
total_snapshots = sum(
stats['result'].get('snapshot_count', 0) for stats in tool_stats.values()
)
# Record flow completion
logger.info("✓ fingerprint detection finished - fingerprints identified: %d", total_updated)
user_log(
scan_id, "fingerprint_detect",
f"fingerprint_detect completed: identified {total_updated} fingerprints"
)
successful_tools = [
name for name in enabled_tools
if name not in [f['tool'] for f in failed_tools]
]
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': scan_workspace_dir,
'urls_file': urls_file,
'url_count': url_count,
'processed_records': total_processed,
'updated_count': total_updated,
'created_count': total_created,
'snapshot_count': total_snapshots,
'executed_tasks': executed_tasks,
'tool_stats': {
'total': len(enabled_tools),
'successful': len(successful_tools),
'failed': len(failed_tools),
'successful_tools': successful_tools,
'failed_tools': failed_tools,
'details': tool_stats
}
}
except ValueError as e:
logger.error("configuration error: %s", e)
raise
except RuntimeError as e:
logger.error("runtime error: %s", e)
raise
except Exception as e:
logger.exception("fingerprint detection failed: %s", e)
raise
def _build_empty_result(
scan_id: int,
target_name: str,
scan_workspace_dir: str,
urls_file: str
) -> dict:
"""构建空结果(无 URL 可扫描时)"""
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': scan_workspace_dir,
'urls_file': urls_file,
'url_count': 0,
'processed_records': 0,
'updated_count': 0,
'created_count': 0,
'snapshot_count': 0,
'executed_tasks': ['export_urls_for_fingerprint'],
'tool_stats': {
'total': 0,
'successful': 0,
'failed': 0,
'successful_tools': [],
'failed_tools': [],
'details': {}
}
}

View File

@@ -1,284 +0,0 @@
"""
扫描初始化 Flow
负责编排扫描任务的初始化流程
职责:
- 使用 FlowOrchestrator 解析 YAML 配置
- 在 Prefect Flow 中执行子 FlowSubflow
- 按照 YAML 顺序编排工作流
- 不包含具体业务逻辑(由 Tasks 和 FlowOrchestrator 实现)
架构:
- Flow: Prefect 编排层(本文件)
- FlowOrchestrator: 配置解析和执行计划apps/scan/services/
- Tasks: 执行层apps/scan/tasks/
- Handlers: 状态管理apps/scan/handlers/
"""
# Django environment setup (takes effect on import)
# Note: dynamic scan containers should be started via run_initiate_scan.py so environment variables are set before import
from apps.common.prefect_django_setup import setup_django_for_prefect
from prefect import flow, task
from pathlib import Path
import logging
from apps.scan.handlers import (
on_initiate_scan_flow_running,
on_initiate_scan_flow_completed,
on_initiate_scan_flow_failed,
)
from prefect.futures import wait
from apps.scan.utils import setup_scan_workspace
from apps.scan.orchestrators import FlowOrchestrator
logger = logging.getLogger(__name__)
@task(name="run_subflow")
def _run_subflow_task(scan_type: str, flow_func, flow_kwargs: dict):
"""包装子 Flow 的 Task用于在并行阶段并发执行子 Flow。"""
logger.info("开始执行子 Flow: %s", scan_type)
return flow_func(**flow_kwargs)
@flow(
name='initiate_scan',
description='Scan task initialization flow',
log_prints=True,
on_running=[on_initiate_scan_flow_running],
on_completion=[on_initiate_scan_flow_completed],
on_failure=[on_initiate_scan_flow_failed],
)
def initiate_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
engine_name: str,
scheduled_scan_name: str | None = None,
) -> dict:
"""
初始化扫描任务(动态工作流编排)
根据 YAML 配置动态编排工作流:
- 从数据库获取 engine_config (YAML)
- 检测启用的扫描类型
- 按照定义的阶段执行:
Stage 1: Discovery (顺序执行)
- subdomain_discovery
- port_scan
- site_scan
Stage 2: Analysis (并行执行)
- url_fetch
- directory_scan
Args:
scan_id: 扫描任务 ID
target_name: 目标名称
target_id: 目标 ID
scan_workspace_dir: Scan 工作空间目录路径
engine_name: 引擎名称(用于显示)
scheduled_scan_name: 定时扫描任务名称(可选,用于通知显示)
Returns:
dict: 执行结果摘要
Raises:
ValueError: 参数验证失败或配置无效
RuntimeError: 执行失败
"""
try:
# ==================== Parameter validation ====================
if not scan_id:
raise ValueError("scan_id is required")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir is required")
if not engine_name:
raise ValueError("engine_name is required")
logger.info("="*60)
logger.info("开始初始化扫描任务")
logger.info(f"Scan ID: {scan_id}")
logger.info(f"Target: {target_name}")
logger.info(f"Engine: {engine_name}")
logger.info(f"Workspace: {scan_workspace_dir}")
logger.info("="*60)
# ==================== Task 1: create the scan workspace ====================
scan_workspace_path = setup_scan_workspace(scan_workspace_dir)
# ==================== Task 2: fetch the engine configuration ====================
from apps.scan.models import Scan
scan = Scan.objects.get(id=scan_id)
engine_config = scan.yaml_configuration
# Use engine_names for display
display_engine_name = ', '.join(scan.engine_names) if scan.engine_names else engine_name
# ==================== Task 3: parse the configuration and build the execution plan ====================
orchestrator = FlowOrchestrator(engine_config)
# FlowOrchestrator has already parsed all tool configurations
enabled_tools_by_type = orchestrator.enabled_tools_by_type
logger.info("执行计划生成成功")
logger.info(f"扫描类型: {''.join(orchestrator.scan_types)}")
logger.info(f"总共 {len(orchestrator.scan_types)} 个 Flow")
# ==================== Initialize stage progress ====================
# Initialize right after parsing the config, once the full scan_types list is available
from apps.scan.services import ScanService
scan_service = ScanService()
scan_service.init_stage_progress(scan_id, orchestrator.scan_types)
logger.info(f"✓ 初始化阶段进度 - Stages: {orchestrator.scan_types}")
# ==================== Update the target's last-scanned time ====================
# Updated when the scan starts, i.e. "when the last scan began"
from apps.targets.services import TargetService
target_service = TargetService()
target_service.update_last_scanned_at(target_id)
logger.info(f"✓ 更新 Target 最后扫描时间 - Target ID: {target_id}")
# ==================== Task 4: run the flows (dynamic stage execution) ====================
# Note: per-stage status updates (running/completed/failed) are handled automatically by scan_flow_handlers.py
executed_flows = []
results = {}
# Common execution parameters
flow_kwargs = {
'scan_id': scan_id,
'target_name': target_name,
'target_id': target_id,
'scan_workspace_dir': str(scan_workspace_path)
}
def record_flow_result(scan_type, result=None, error=None):
"""
统一的结果记录函数
Args:
scan_type: 扫描类型名称
result: 执行结果(成功时)
error: 异常对象(失败时)
"""
if error:
# On failure: log the error without raising, so later stages keep running
error_msg = f"{scan_type} failed: {str(error)}"
logger.warning(error_msg)
executed_flows.append(f"{scan_type} (failed)")
results[scan_type] = {'success': False, 'error': str(error)}
# No exception is raised, so the scan continues
else:
# On success
executed_flows.append(scan_type)
results[scan_type] = result
logger.info(f"{scan_type} succeeded")
def get_valid_flows(flow_names):
"""
获取有效的 Flow 函数列表,并为每个 Flow 准备专属参数
Args:
flow_names: 扫描类型名称列表
Returns:
list: [(scan_type, flow_func, flow_specific_kwargs), ...] 有效的函数列表
"""
valid_flows = []
for scan_type in flow_names:
flow_func = orchestrator.get_flow_function(scan_type)
if flow_func:
# Prepare flow-specific parameters (including the matching enabled_tools)
flow_specific_kwargs = dict(flow_kwargs)
flow_specific_kwargs['enabled_tools'] = enabled_tools_by_type.get(scan_type, {})
valid_flows.append((scan_type, flow_func, flow_specific_kwargs))
else:
logger.warning(f"跳过未实现的 Flow: {scan_type}")
return valid_flows
# ---------------------------------------------------------
# Dynamic stage execution (as defined by FlowOrchestrator)
# ---------------------------------------------------------
for mode, enabled_flows in orchestrator.get_execution_stages():
if mode == 'sequential':
# Sequential execution
logger.info("="*60)
logger.info(f"顺序执行阶段: {', '.join(enabled_flows)}")
logger.info("="*60)
for scan_type, flow_func, flow_specific_kwargs in get_valid_flows(enabled_flows):
logger.info("="*60)
logger.info(f"执行 Flow: {scan_type}")
logger.info("="*60)
try:
result = flow_func(**flow_specific_kwargs)
record_flow_result(scan_type, result=result)
except Exception as e:
record_flow_result(scan_type, error=e)
elif mode == 'parallel':
# Parallel stage: wrap subflows in tasks and run them concurrently with the Prefect TaskRunner
logger.info("="*60)
logger.info(f"并行执行阶段: {', '.join(enabled_flows)}")
logger.info("="*60)
futures = []
# Submit all parallel subflow tasks
for scan_type, flow_func, flow_specific_kwargs in get_valid_flows(enabled_flows):
logger.info("="*60)
logger.info(f"提交并行子 Flow 任务: {scan_type}")
logger.info("="*60)
future = _run_subflow_task.submit(
scan_type=scan_type,
flow_func=flow_func,
flow_kwargs=flow_specific_kwargs,
)
futures.append((scan_type, future))
# Wait for all parallel subflows to finish
if futures:
wait([f for _, f in futures])
# Check the results (reusing the shared result handling)
for scan_type, future in futures:
try:
result = future.result()
record_flow_result(scan_type, result=result)
except Exception as e:
record_flow_result(scan_type, error=e)
# ==================== Done ====================
logger.info("="*60)
logger.info("✓ 扫描任务初始化完成")
logger.info(f"执行的 Flow: {', '.join(executed_flows)}")
logger.info("="*60)
# ==================== Return the result ====================
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': str(scan_workspace_path),
'executed_flows': executed_flows,
'results': results
}
except ValueError as e:
# Parameter error
logger.error("parameter error: %s", e)
raise
except RuntimeError as e:
# Execution failure
logger.error("runtime error: %s", e)
raise
except OSError as e:
# Filesystem error (workspace creation failed)
logger.error("filesystem error: %s", e)
raise
except Exception as e:
# Other unexpected errors
logger.exception("scan task initialization failed: %s", e)
# Note: the failure status update is handled automatically by Prefect state handlers
raise

View File

@@ -1,251 +0,0 @@
from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
from datetime import datetime
from pathlib import Path
from typing import Dict
from prefect import flow
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_nuclei_templates_local, user_log
from apps.scan.tasks.vuln_scan import (
export_endpoints_task,
run_vuln_tool_task,
run_and_stream_save_dalfox_vulns_task,
run_and_stream_save_nuclei_vulns_task,
)
from .utils import calculate_timeout_by_line_count
logger = logging.getLogger(__name__)
@flow(
name="endpoints_vuln_scan_flow",
log_prints=True,
)
def endpoints_vuln_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: Dict[str, dict],
) -> dict:
"""基于 Endpoint 的漏洞扫描 Flow串行执行 Dalfox 等工具)。"""
try:
if scan_id is None:
raise ValueError("scan_id must not be empty")
if not target_name:
raise ValueError("target_name must not be empty")
if target_id is None:
raise ValueError("target_id must not be empty")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir must not be empty")
if not enabled_tools:
raise ValueError("enabled_tools must not be empty")
from apps.scan.utils import setup_scan_directory
vuln_scan_dir = setup_scan_directory(scan_workspace_dir, 'vuln_scan')
endpoints_file = vuln_scan_dir / "input_endpoints.txt"
# Step 1: export the endpoint URLs
export_result = export_endpoints_task(
target_id=target_id,
output_file=str(endpoints_file),
)
total_endpoints = export_result.get("total_count", 0)
if total_endpoints == 0 or not endpoints_file.exists() or endpoints_file.stat().st_size == 0:
logger.warning("目标下没有可用 Endpoint跳过漏洞扫描")
return {
"success": True,
"scan_id": scan_id,
"target": target_name,
"scan_workspace_dir": scan_workspace_dir,
"endpoints_file": str(endpoints_file),
"endpoint_count": 0,
"executed_tools": [],
"tool_results": {},
}
logger.info("Endpoint 导出完成,共 %d 条,开始执行漏洞扫描", total_endpoints)
tool_results: Dict[str, dict] = {}
# Step 2: run each vulnerability scan tool in parallel (currently mainly Dalfox)
# (1) submit a Prefect task per tool first so workers can schedule them in parallel
# (2) then collect each result and assemble tool_results
tool_futures: Dict[str, dict] = {}
for tool_name, tool_config in enabled_tools.items():
# Nuclei needs its local templates to exist first (multiple template repos supported)
template_args = ""
if tool_name == "nuclei":
repo_names = tool_config.get("template_repo_names")
if not repo_names or not isinstance(repo_names, (list, tuple)):
logger.error("Nuclei 配置缺少 template_repo_names数组跳过")
continue
template_paths = []
try:
for repo_name in repo_names:
path = ensure_nuclei_templates_local(repo_name)
template_paths.append(path)
logger.info("Nuclei 模板路径 [%s]: %s", repo_name, path)
except Exception as e:
logger.error("获取 Nuclei 模板失败: %s,跳过 nuclei 扫描", e)
continue
template_args = " ".join(f"-t {p}" for p in template_paths)
# Build the command parameters
command_params = {"endpoints_file": str(endpoints_file)}
if template_args:
command_params["template_args"] = template_args
command = build_scan_command(
tool_name=tool_name,
scan_type="vuln_scan",
command_params=command_params,
tool_config=tool_config,
)
raw_timeout = tool_config.get("timeout", 600)
if isinstance(raw_timeout, str) and raw_timeout == "auto":
# With timeout=auto, compute the timeout automatically from the endpoints_file line count
# Dalfox: 100 seconds per line; Nuclei: 30 seconds per line
base_per_time = 30 if tool_name == "nuclei" else 100
timeout = calculate_timeout_by_line_count(
tool_config=tool_config,
file_path=str(endpoints_file),
base_per_time=base_per_time,
)
else:
try:
timeout = int(raw_timeout)
except (TypeError, ValueError) as e:
raise ValueError(
f"Invalid timeout config for tool {tool_name}: {raw_timeout!r}"
) from e
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
log_file = vuln_scan_dir / f"{tool_name}_{timestamp}.log"
# Dalfox XSS uses a streaming task that parses and saves findings as they arrive
if tool_name == "dalfox_xss":
logger.info("Submitted vulnerability scan tool %s (streaming result save)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_and_stream_save_dalfox_vulns_task.submit(
cmd=command,
tool_name=tool_name,
scan_id=scan_id,
target_id=target_id,
cwd=str(vuln_scan_dir),
shell=True,
batch_size=1,
timeout=timeout,
log_file=str(log_file),
)
tool_futures[tool_name] = {
"future": future,
"command": command,
"timeout": timeout,
"log_file": str(log_file),
"mode": "streaming",
}
elif tool_name == "nuclei":
# Nuclei also uses a streaming task
logger.info("Submitted vulnerability scan tool %s (streaming result save)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_and_stream_save_nuclei_vulns_task.submit(
cmd=command,
tool_name=tool_name,
scan_id=scan_id,
target_id=target_id,
cwd=str(vuln_scan_dir),
shell=True,
batch_size=1,
timeout=timeout,
log_file=str(log_file),
)
tool_futures[tool_name] = {
"future": future,
"command": command,
"timeout": timeout,
"log_file": str(log_file),
"mode": "streaming",
}
else:
# Other tools still use the non-streaming execution path
logger.info("Submitted vulnerability scan tool %s", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_vuln_tool_task.submit(
tool_name=tool_name,
command=command,
timeout=timeout,
log_file=str(log_file),
)
tool_futures[tool_name] = {
"future": future,
"command": command,
"timeout": timeout,
"log_file": str(log_file),
"mode": "normal",
}
# Collect the results of all tools in one place
for tool_name, meta in tool_futures.items():
future = meta["future"]
try:
result = future.result()
if meta["mode"] == "streaming":
created_vulns = result.get("created_vulns", 0)
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"processed_records": result.get("processed_records"),
"created_vulns": created_vulns,
"command_log_file": meta["log_file"],
}
logger.info("✓ 工具 %s 执行完成 - 漏洞: %d", tool_name, created_vulns)
user_log(scan_id, "vuln_scan", f"{tool_name} completed: found {created_vulns} vulnerabilities")
else:
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"duration": result.get("duration"),
"returncode": result.get("returncode"),
"command_log_file": result.get("command_log_file"),
}
logger.info("✓ 工具 %s 执行完成 - returncode=%s", tool_name, result.get("returncode"))
user_log(scan_id, "vuln_scan", f"{tool_name} completed")
except Exception as e:
reason = str(e)
logger.error("工具 %s 执行失败: %s", tool_name, e, exc_info=True)
user_log(scan_id, "vuln_scan", f"{tool_name} failed: {reason}", "error")
return {
"success": True,
"scan_id": scan_id,
"target": target_name,
"scan_workspace_dir": scan_workspace_dir,
"endpoints_file": str(endpoints_file),
"endpoint_count": total_endpoints,
"executed_tools": list(enabled_tools.keys()),
"tool_results": tool_results,
}
except Exception as e:
logger.exception("Endpoint 漏洞扫描失败: %s", e)
raise
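
For reference, a minimal sketch of how this deleted sub-flow was invoked; the enabled_tools payload below is hypothetical and only mirrors the fields the flow reads (timeout, template_repo_names):

result = endpoints_vuln_scan_flow(
    scan_id=100,
    target_name="example.com",
    target_id=1,
    scan_workspace_dir="/workspace/scan_100",
    enabled_tools={
        # timeout="auto" derives the limit from the endpoint line count
        "dalfox_xss": {"timeout": "auto"},
        "nuclei": {"timeout": "auto", "template_repo_names": ["nuclei-templates"]},
    },
)
print(result["endpoint_count"], sorted(result["tool_results"]))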

View File

@@ -1,123 +0,0 @@
"""
Vulnerability scan main flow
"""
import logging
from typing import Dict, Tuple
from prefect import flow
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.configs.command_templates import get_command_template
from apps.scan.utils import user_log, wait_for_system_load
from .endpoints_vuln_scan_flow import endpoints_vuln_scan_flow
logger = logging.getLogger(__name__)
def _classify_vuln_tools(enabled_tools: Dict[str, dict]) -> Tuple[Dict[str, dict], Dict[str, dict]]:
"""根据命令模板中的 input_type 对漏洞扫描工具进行分类。
当前支持:
- endpoints_file: 以端点列表文件为输入(例如 Dalfox XSS
预留:
- 其他 input_type 将被归类到 other_tools暂不处理。
"""
endpoints_tools: Dict[str, dict] = {}
other_tools: Dict[str, dict] = {}
for tool_name, tool_config in enabled_tools.items():
template = get_command_template("vuln_scan", tool_name) or {}
input_type = template.get("input_type", "endpoints_file")
if input_type == "endpoints_file":
endpoints_tools[tool_name] = tool_config
else:
other_tools[tool_name] = tool_config
return endpoints_tools, other_tools
@flow(
name="vuln_scan",
log_prints=True,
on_running=[on_scan_flow_running],
on_completion=[on_scan_flow_completed],
on_failure=[on_scan_flow_failed],
)
def vuln_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: Dict[str, dict],
) -> dict:
"""漏洞扫描主 Flow串行编排各类漏洞扫描子 Flow。
支持工具:
- dalfox_xss: XSS 漏洞扫描(流式保存)
- nuclei: 通用漏洞扫描(流式保存,支持模板 commit hash 同步)
"""
try:
# Load check: wait until system resources are sufficient
wait_for_system_load(context="vuln_scan_flow")
if scan_id is None:
raise ValueError("scan_id must not be empty")
if not target_name:
raise ValueError("target_name must not be empty")
if target_id is None:
raise ValueError("target_id must not be empty")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir must not be empty")
if not enabled_tools:
raise ValueError("enabled_tools must not be empty")
logger.info("Starting vulnerability scan - Scan ID: %s, Target: %s", scan_id, target_name)
user_log(scan_id, "vuln_scan", "Starting vulnerability scan")
# Step 1: Classify tools
endpoints_tools, other_tools = _classify_vuln_tools(enabled_tools)
logger.info(
"Vulnerability tool classification - endpoints_file: %s, other: %s",
list(endpoints_tools.keys()) or "",
list(other_tools.keys()) or "",
)
if other_tools:
logger.warning(
"Vulnerability scan tools with unsupported input types will be ignored: %s",
list(other_tools.keys()),
)
if not endpoints_tools:
raise ValueError("The vulnerability scan requires at least one tool that takes endpoints_file input (e.g. dalfox_xss, nuclei)")
# Step 2: Run the endpoint vulnerability scan sub-flow (serial)
endpoint_result = endpoints_vuln_scan_flow(
scan_id=scan_id,
target_name=target_name,
target_id=target_id,
scan_workspace_dir=scan_workspace_dir,
enabled_tools=endpoints_tools,
)
# Record flow completion
total_vulns = sum(
r.get("created_vulns", 0)
for r in endpoint_result.get("tool_results", {}).values()
)
logger.info("✓ 漏洞扫描完成 - 新增漏洞: %d", total_vulns)
user_log(scan_id, "vuln_scan", f"vuln_scan completed: found {total_vulns} vulnerabilities")
# 目前只有一个子 Flow直接返回其结果
return endpoint_result
except Exception as e:
logger.exception("漏洞扫描主 Flow 失败: %s", e)
raise
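
A hedged sketch of the classification step above; it assumes the vuln_scan command templates for both tools declare (or default to) input_type="endpoints_file":

enabled = {"dalfox_xss": {"timeout": "auto"}, "nuclei": {"timeout": 600}}
endpoints_tools, other_tools = _classify_vuln_tools(enabled)
# get_command_template(...) defaults input_type to "endpoints_file",
# so both tools land in endpoints_tools here.
assert set(endpoints_tools) == {"dalfox_xss", "nuclei"}
assert other_tools == {}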

View File

@@ -1,189 +0,0 @@
"""
Scan flow handlers
Handle state changes and notifications for scan flows (port scan, subdomain discovery, etc.)
Responsibilities:
- Update per-stage progress status (running/completed/failed)
- Send notifications for scan stages
- Record flow performance metrics
"""
import logging
from prefect import Flow
from prefect.client.schemas import FlowRun, State
from apps.scan.utils.performance import FlowPerformanceTracker
from apps.scan.utils import user_log
logger = logging.getLogger(__name__)
# Performance tracker per flow_run
_flow_trackers: dict[str, FlowPerformanceTracker] = {}
def _get_stage_from_flow_name(flow_name: str) -> str | None:
"""
Map a flow name to its stage.
The flow name is used directly as the stage (matching the engine_config keys).
Excludes the main flow (initiate_scan).
"""
# Exclude the main flow; it is not a stage flow
if flow_name == 'initiate_scan':
return None
return flow_name
def on_scan_flow_running(flow: Flow, flow_run: FlowRun, state: State) -> None:
"""
Callback fired when a scan flow starts running.
Responsibilities:
- Set stage progress to running
- Send the scan-started notification
- Start performance tracking
Args:
flow: Prefect Flow object
flow_run: the flow run instance
state: current flow state
"""
logger.info("🚀 Scan flow started - Flow: %s, Run ID: %s", flow.name, flow_run.id)
# Extract flow parameters
flow_params = flow_run.parameters or {}
scan_id = flow_params.get('scan_id')
target_name = flow_params.get('target_name', 'unknown')
target_id = flow_params.get('target_id')
# Start performance tracking
if scan_id:
tracker = FlowPerformanceTracker(flow.name, scan_id)
tracker.start(target_id=target_id, target_name=target_name)
_flow_trackers[str(flow_run.id)] = tracker
# Update stage progress
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
try:
from apps.scan.services import ScanService
service = ScanService()
service.start_stage(scan_id, stage)
logger.info(f"✓ 阶段进度已更新为 running - Scan ID: {scan_id}, Stage: {stage}")
except Exception as e:
logger.error(f"更新阶段进度失败 - Scan ID: {scan_id}, Stage: {stage}: {e}")
def on_scan_flow_completed(flow: Flow, flow_run: FlowRun, state: State) -> None:
"""
Callback fired when a scan flow completes.
Responsibilities:
- Set stage progress to completed
- Optionally send the scan-completed notification
- Record performance metrics
Args:
flow: Prefect Flow object
flow_run: the flow run instance
state: current flow state
"""
logger.info("✅ Scan flow completed - Flow: %s, Run ID: %s", flow.name, flow_run.id)
# Extract flow parameters
flow_params = flow_run.parameters or {}
scan_id = flow_params.get('scan_id')
# Fetch the flow result
result = None
try:
result = state.result() if state.result else None
except Exception:
pass
# Record performance metrics
tracker = _flow_trackers.pop(str(flow_run.id), None)
if tracker:
tracker.finish(success=True)
# Update stage progress
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
try:
from apps.scan.services import ScanService
service = ScanService()
# Extract detail from the flow result, if any
detail = None
if isinstance(result, dict):
detail = result.get('detail')
service.complete_stage(scan_id, stage, detail)
logger.info(f"✓ 阶段进度已更新为 completed - Scan ID: {scan_id}, Stage: {stage}")
# 每个阶段完成后刷新缓存统计,便于前端实时看到增量
try:
service.update_cached_stats(scan_id)
logger.info("✓ 阶段完成后已刷新缓存统计 - Scan ID: %s", scan_id)
except Exception as e:
logger.error("阶段完成后刷新缓存统计失败 - Scan ID: %s, 错误: %s", scan_id, e)
except Exception as e:
logger.error(f"更新阶段进度失败 - Scan ID: {scan_id}, Stage: {stage}: {e}")
def on_scan_flow_failed(flow: Flow, flow_run: FlowRun, state: State) -> None:
"""
Callback fired when a scan flow fails.
Responsibilities:
- Set stage progress to failed
- Send the scan-failed notification
- Record performance metrics (including the error message)
- Write a ScanLog entry for the frontend
Args:
flow: Prefect Flow object
flow_run: the flow run instance
state: current flow state
"""
logger.info("❌ Scan flow failed - Flow: %s, Run ID: %s", flow.name, flow_run.id)
# Extract flow parameters
flow_params = flow_run.parameters or {}
scan_id = flow_params.get('scan_id')
target_name = flow_params.get('target_name', 'unknown')
# Extract the error message
error_message = str(state.message) if state.message else "unknown error"
# Write a ScanLog entry for the frontend
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
user_log(scan_id, stage, f"Failed: {error_message}", "error")
# Record performance metrics (failure case)
tracker = _flow_trackers.pop(str(flow_run.id), None)
if tracker:
tracker.finish(success=False, error_message=error_message)
# Update stage progress
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
try:
from apps.scan.services import ScanService
service = ScanService()
service.fail_stage(scan_id, stage, error_message)
logger.info(f"✓ 阶段进度已更新为 failed - Scan ID: {scan_id}, Stage: {stage}")
except Exception as e:
logger.error(f"更新阶段进度失败 - Scan ID: {scan_id}, Stage: {stage}: {e}")
# Send notification
try:
from apps.scan.notifications import create_notification, NotificationLevel
message = f"任务:{flow.name}\n状态:执行失败\n错误:{error_message}"
create_notification(
title=target_name,
message=message,
level=NotificationLevel.HIGH
)
logger.error(f"✓ 扫描失败通知已发送 - Target: {target_name}, Flow: {flow.name}, Error: {error_message}")
except Exception as e:
logger.error(f"发送扫描失败通知失败 - Flow: {flow.name}: {e}")

View File

@@ -1,56 +0,0 @@
"""
Scan target provider module
Provides a unified interface for obtaining scan targets from multiple data sources:
- DatabaseTargetProvider: queries the database (full scan)
- ListTargetProvider: uses an in-memory list (quick scan, stage 1)
- SnapshotTargetProvider: reads from snapshot tables (quick scan, stage 2+)
- PipelineTargetProvider: uses pipeline output (Phase 2)
Usage:
from apps.scan.providers import (
DatabaseTargetProvider,
ListTargetProvider,
SnapshotTargetProvider,
ProviderContext
)
# Database mode (full scan)
provider = DatabaseTargetProvider(target_id=123)
# List mode (quick scan, stage 1)
context = ProviderContext(target_id=1, scan_id=100)
provider = ListTargetProvider(
targets=["a.test.com"],
context=context
)
# Snapshot mode (quick scan, stage 2+)
context = ProviderContext(target_id=1, scan_id=100)
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain",
context=context
)
# Using a provider
for host in provider.iter_hosts():
scan(host)
"""
from .base import TargetProvider, ProviderContext
from .list_provider import ListTargetProvider
from .database_provider import DatabaseTargetProvider
from .snapshot_provider import SnapshotTargetProvider, SnapshotType
from .pipeline_provider import PipelineTargetProvider, StageOutput
__all__ = [
'TargetProvider',
'ProviderContext',
'ListTargetProvider',
'DatabaseTargetProvider',
'SnapshotTargetProvider',
'SnapshotType',
'PipelineTargetProvider',
'StageOutput',
]

View File

@@ -1,115 +0,0 @@
"""
Scan target provider base module
Defines the ProviderContext dataclass and the TargetProvider abstract base class.
"""
import ipaddress
import logging
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import TYPE_CHECKING, Iterator, Optional
if TYPE_CHECKING:
from apps.common.utils import BlacklistFilter
logger = logging.getLogger(__name__)
@dataclass
class ProviderContext:
"""
Provider context carrying metadata
Attributes:
target_id: associated Target ID (used when saving results); None means a temporary scan (nothing saved)
scan_id: scan task ID
"""
target_id: Optional[int] = None
scan_id: Optional[int] = None
class TargetProvider(ABC):
"""
Abstract base class for scan target providers
Responsibilities:
- Provide an iterator over scan targets (domains, IPs, URLs, ...)
- Provide the blacklist filter
- Carry context (target_id, scan_id, ...)
- Expand CIDRs automatically so subclasses don't have to
Usage:
provider = create_target_provider(target_id=123)
for host in provider.iter_hosts():
print(host)
"""
def __init__(self, context: Optional[ProviderContext] = None):
self._context = context or ProviderContext()
@property
def context(self) -> ProviderContext:
"""返回 Provider 上下文"""
return self._context
@staticmethod
def _expand_host(host: str) -> Iterator[str]:
"""
Expand a host (a CIDR expands to multiple IPs; anything else is returned as-is)
Examples:
"192.168.1.0/30" -> "192.168.1.1", "192.168.1.2"
"192.168.1.1" -> "192.168.1.1"
"example.com" -> "example.com"
"""
from apps.common.validators import detect_target_type
from apps.targets.models import Target
host = host.strip()
if not host:
return
try:
target_type = detect_target_type(host)
if target_type == Target.TargetType.CIDR:
network = ipaddress.ip_network(host, strict=False)
if network.num_addresses == 1:
yield str(network.network_address)
else:
yield from (str(ip) for ip in network.hosts())
elif target_type in (Target.TargetType.IP, Target.TargetType.DOMAIN):
yield host
except ValueError as e:
logger.warning("跳过无效的主机格式 '%s': %s", host, str(e))
def iter_hosts(self) -> Iterator[str]:
"""迭代主机列表(域名/IP自动展开 CIDR"""
for host in self._iter_raw_hosts():
yield from self._expand_host(host)
@abstractmethod
def _iter_raw_hosts(self) -> Iterator[str]:
"""迭代原始主机列表(可能包含 CIDR子类实现"""
pass
@abstractmethod
def iter_urls(self) -> Iterator[str]:
"""迭代 URL 列表"""
pass
@abstractmethod
def get_blacklist_filter(self) -> Optional['BlacklistFilter']:
"""获取黑名单过滤器,返回 None 表示不过滤"""
pass
@property
def target_id(self) -> Optional[int]:
"""返回关联的 target_id临时扫描返回 None"""
return self._context.target_id
@property
def scan_id(self) -> Optional[int]:
"""返回关联的 scan_id"""
return self._context.scan_id
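
A toy subclass illustrating the base-class contract; a sketch, assuming the Django app context that _expand_host needs (detect_target_type, Target) is available:

from typing import Iterator

class StaticTargetProvider(TargetProvider):
    """Yields a fixed host list; CIDR expansion comes from TargetProvider.iter_hosts()."""

    def _iter_raw_hosts(self) -> Iterator[str]:
        yield from ["example.com", "192.168.1.0/30"]

    def iter_urls(self) -> Iterator[str]:
        return iter(())

    def get_blacklist_filter(self) -> None:
        return None

# list(StaticTargetProvider().iter_hosts())
# -> ["example.com", "192.168.1.1", "192.168.1.2"]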

View File

@@ -1,93 +0,0 @@
"""
Database target provider module
Provides the database-backed target provider implementation.
"""
import logging
from typing import TYPE_CHECKING, Iterator, Optional
from .base import ProviderContext, TargetProvider
if TYPE_CHECKING:
from apps.common.utils import BlacklistFilter
logger = logging.getLogger(__name__)
class DatabaseTargetProvider(TargetProvider):
"""
Database target provider - queries the Target table and related asset tables
Data sources:
- iter_hosts(): returns domains/IPs according to the Target type
- iter_urls(): WebSite/Endpoint tables, with a fallback chain
Usage:
provider = DatabaseTargetProvider(target_id=123)
for host in provider.iter_hosts():
scan(host)
"""
def __init__(self, target_id: int, context: Optional[ProviderContext] = None):
ctx = context or ProviderContext()
ctx.target_id = target_id
super().__init__(ctx)
self._blacklist_filter: Optional['BlacklistFilter'] = None
def iter_hosts(self) -> Iterator[str]:
"""从数据库查询主机列表,自动展开 CIDR 并应用黑名单过滤"""
blacklist = self.get_blacklist_filter()
for host in self._iter_raw_hosts():
for expanded_host in self._expand_host(host):
if not blacklist or blacklist.is_allowed(expanded_host):
yield expanded_host
def _iter_raw_hosts(self) -> Iterator[str]:
"""从数据库查询原始主机列表(可能包含 CIDR"""
from apps.asset.services.asset.subdomain_service import SubdomainService
from apps.targets.models import Target
from apps.targets.services import TargetService
target = TargetService().get_target(self.target_id)
if not target:
logger.warning("Target ID %d 不存在", self.target_id)
return
if target.type == Target.TargetType.DOMAIN:
yield target.name
for domain in SubdomainService().iter_subdomain_names_by_target(
target_id=self.target_id,
chunk_size=1000
):
if domain != target.name:
yield domain
elif target.type in (Target.TargetType.IP, Target.TargetType.CIDR):
yield target.name
def iter_urls(self) -> Iterator[str]:
"""从数据库查询 URL 列表使用回退链Endpoint → WebSite → Default"""
from apps.scan.services.target_export_service import (
DataSource,
_iter_urls_with_fallback,
)
blacklist = self.get_blacklist_filter()
for url, _ in _iter_urls_with_fallback(
target_id=self.target_id,
sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT],
blacklist_filter=blacklist
):
yield url
def get_blacklist_filter(self) -> Optional['BlacklistFilter']:
"""获取黑名单过滤器(延迟加载)"""
if self._blacklist_filter is None:
from apps.common.services import BlacklistService
from apps.common.utils import BlacklistFilter
rules = BlacklistService().get_rules(self.target_id)
self._blacklist_filter = BlacklistFilter(rules)
return self._blacklist_filter

View File

@@ -1,84 +0,0 @@
"""
List target provider module
Provides the in-memory list-backed target provider implementation.
"""
from typing import Iterator, Optional, List
from .base import TargetProvider, ProviderContext
class ListTargetProvider(TargetProvider):
"""
List target provider - consumes an in-memory list directly
Used for quick scans, ad-hoc scans, and similar cases where only user-specified targets are scanned.
Traits:
- No database queries
- No blacklist filtering (the targets were explicitly given by the user)
- No target_id association (the caller is responsible for creating the Target)
- Auto-detects the input type (URL/domain/IP/CIDR)
- Expands CIDRs automatically
Usage:
# Quick scan: user-supplied targets with automatic type detection
provider = ListTargetProvider(targets=[
"example.com", # domain
"192.168.1.0/24", # CIDR, expanded automatically
"https://api.example.com" # URL
])
for host in provider.iter_hosts():
scan(host)
"""
def __init__(
self,
targets: Optional[List[str]] = None,
context: Optional[ProviderContext] = None
):
"""
Initialize the list target provider
Args:
targets: target list; types (URL/domain/IP/CIDR) are detected automatically
context: provider context
"""
from apps.common.validators import detect_input_type
ctx = context or ProviderContext()
super().__init__(ctx)
# Auto-classify targets
self._hosts = []
self._urls = []
if targets:
for target in targets:
target = target.strip()
if not target:
continue
try:
input_type = detect_input_type(target)
if input_type == 'url':
self._urls.append(target)
else:
# domain/ip/cidr are all treated as hosts
self._hosts.append(target)
except ValueError:
# Unrecognized type; default to treating it as a host
self._hosts.append(target)
def _iter_raw_hosts(self) -> Iterator[str]:
"""迭代原始主机列表(可能包含 CIDR"""
yield from self._hosts
def iter_urls(self) -> Iterator[str]:
"""迭代 URL 列表"""
yield from self._urls
def get_blacklist_filter(self) -> None:
"""列表模式不使用黑名单过滤"""
return None
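
A short sketch of the auto-classification described above (assumes the Django context needed by detect_input_type; values are illustrative):

provider = ListTargetProvider(targets=[
    "example.com",             # classified as a host
    "192.168.1.0/30",          # host; expanded when iterated
    "https://api.example.com", # classified as a URL
])
assert list(provider.iter_urls()) == ["https://api.example.com"]
# iter_hosts() yields example.com plus 192.168.1.1 and 192.168.1.2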

View File

@@ -1,91 +0,0 @@
"""
Pipeline target provider module
Provides the target provider backed by a pipeline stage's output.
Used for stage-to-stage data hand-off in Phase 2 pipeline mode.
"""
from dataclasses import dataclass, field
from typing import Iterator, Optional, List, Dict, Any
from .base import TargetProvider, ProviderContext
@dataclass
class StageOutput:
"""
Stage output data
Used to pass data between pipeline stages.
Attributes:
hosts: host list (domains/IPs)
urls: URL list
new_targets: newly discovered targets
stats: statistics
success: whether the stage succeeded
error: error message
"""
hosts: List[str] = field(default_factory=list)
urls: List[str] = field(default_factory=list)
new_targets: List[str] = field(default_factory=list)
stats: Dict[str, Any] = field(default_factory=dict)
success: bool = True
error: Optional[str] = None
class PipelineTargetProvider(TargetProvider):
"""
Pipeline target provider - consumes the previous stage's output
Used for stage-to-stage data hand-off in Phase 2 pipeline mode.
Traits:
- No database queries
- No blacklist filtering (the data was filtered in the previous stage)
- Uses the data in StageOutput directly
Usage (Phase 2):
stage1_output = stage1.run(input)
provider = PipelineTargetProvider(
previous_output=stage1_output,
target_id=123
)
for host in provider.iter_hosts():
stage2.scan(host)
"""
def __init__(
self,
previous_output: StageOutput,
target_id: Optional[int] = None,
context: Optional[ProviderContext] = None
):
"""
Initialize the pipeline target provider
Args:
previous_output: the previous stage's output
target_id: optional Target ID to associate (for saving results)
context: provider context
"""
ctx = context or ProviderContext(target_id=target_id)
super().__init__(ctx)
self._previous_output = previous_output
def _iter_raw_hosts(self) -> Iterator[str]:
"""迭代上一阶段输出的原始主机(可能包含 CIDR"""
yield from self._previous_output.hosts
def iter_urls(self) -> Iterator[str]:
"""迭代上一阶段输出的 URL"""
yield from self._previous_output.urls
def get_blacklist_filter(self) -> None:
"""管道传递的数据已经过滤过了"""
return None
@property
def previous_output(self) -> StageOutput:
"""返回上一阶段的输出"""
return self._previous_output
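
A sketch of the intended Phase 2 hand-off; the stage consumers here are placeholders:

stage1_output = StageOutput(
    hosts=["a.test.com"],
    urls=["https://a.test.com/login"],
    stats={"subdomains": 1},
)
provider = PipelineTargetProvider(previous_output=stage1_output, target_id=123)
for host in provider.iter_hosts():
    pass  # stage 2 consumes exactly what stage 1 produced; no blacklist re-filtering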

View File

@@ -1,175 +0,0 @@
"""
Snapshot target provider module
Provides the snapshot-table-backed target provider implementation.
Used for stage-to-stage data hand-off in quick scans.
"""
import logging
from typing import Iterator, Optional, Literal
from .base import TargetProvider, ProviderContext
logger = logging.getLogger(__name__)
# Snapshot type definition
SnapshotType = Literal["subdomain", "website", "endpoint", "host_port"]
class SnapshotTargetProvider(TargetProvider):
"""
Snapshot target provider - reads the data produced by the current scan from snapshot tables
Used for stage-to-stage data hand-off in quick scans; solves the precise scan-scope control problem.
Core value:
- Only returns assets discovered by this scan (scan_id)
- Avoids rescanning historical data (DatabaseTargetProvider would scan all historical assets)
Traits:
- Filters the snapshot tables by scan_id
- No blacklist filtering (the data was filtered in the previous stage)
- Supports multiple snapshot types: subdomain/website/endpoint/host_port
Scenario:
# Quick scan pipeline
User input: a.test.com
Create Target: test.com (id=1)
Create Scan: scan_id=100
# Stage 1: subdomain discovery
provider = ListTargetProvider(
targets=["a.test.com"],
context=ProviderContext(target_id=1, scan_id=100)
)
# Discovered: b.a.test.com, c.a.test.com
# Saved: SubdomainSnapshot(scan_id=100) + Subdomain(target_id=1)
# Stage 2: port scan
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain",
context=ProviderContext(target_id=1, scan_id=100)
)
# Returns only: b.a.test.com, c.a.test.com (discovered by this scan)
# Does not return: www.test.com, api.test.com (historical data)
# Stage 3: website scan
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="host_port",
context=ProviderContext(target_id=1, scan_id=100)
)
# Returns only the IP:Port pairs discovered by this scan
"""
def __init__(
self,
scan_id: int,
snapshot_type: SnapshotType,
context: Optional[ProviderContext] = None
):
"""
Initialize the snapshot target provider
Args:
scan_id: scan task ID (required)
snapshot_type: snapshot type
- "subdomain": subdomain snapshots (SubdomainSnapshot)
- "website": website snapshots (WebsiteSnapshot)
- "endpoint": endpoint snapshots (EndpointSnapshot)
- "host_port": host-port mapping snapshots (HostPortMappingSnapshot)
context: provider context
"""
ctx = context or ProviderContext()
ctx.scan_id = scan_id
super().__init__(ctx)
self._scan_id = scan_id
self._snapshot_type = snapshot_type
def _iter_raw_hosts(self) -> Iterator[str]:
"""
Iterate over hosts from the snapshot tables
The snapshot table is chosen by snapshot_type:
- subdomain: SubdomainSnapshot.name
- host_port: HostPortMappingSnapshot.host (yields host:port strings without validation)
"""
if self._snapshot_type == "subdomain":
from apps.asset.services.snapshot import SubdomainSnapshotsService
service = SubdomainSnapshotsService()
yield from service.iter_subdomain_names_by_scan(
scan_id=self._scan_id,
chunk_size=1000
)
elif self._snapshot_type == "host_port":
# The host_port type does not use _iter_raw_hosts; it is handled directly in iter_hosts
# Return nothing here so the base-class iter_hosts stays empty for it
return
else:
# Other types do not support iter_hosts yet
logger.warning(
"Snapshot type '%s' does not support iter_hosts; returning an empty iterator",
self._snapshot_type
)
return
def iter_hosts(self) -> Iterator[str]:
"""
Iterate over hosts
For the host_port type, yields host:port strings without CIDR expansion or validation
"""
if self._snapshot_type == "host_port":
# The host_port type yields host:port directly, bypassing _expand_host validation
from apps.asset.services.snapshot import HostPortMappingSnapshotsService
service = HostPortMappingSnapshotsService()
queryset = service.get_by_scan(scan_id=self._scan_id)
for mapping in queryset.iterator(chunk_size=1000):
yield f"{mapping.host}:{mapping.port}"
else:
# Other types use the base-class iter_hosts (which calls _iter_raw_hosts and expands CIDRs)
yield from super().iter_hosts()
def iter_urls(self) -> Iterator[str]:
"""
Iterate over URLs from the snapshot tables
The snapshot table is chosen by snapshot_type:
- website: WebsiteSnapshot.url
- endpoint: EndpointSnapshot.url
"""
if self._snapshot_type == "website":
from apps.asset.services.snapshot import WebsiteSnapshotsService
service = WebsiteSnapshotsService()
yield from service.iter_website_urls_by_scan(
scan_id=self._scan_id,
chunk_size=1000
)
elif self._snapshot_type == "endpoint":
from apps.asset.services.snapshot import EndpointSnapshotsService
service = EndpointSnapshotsService()
# Fetch endpoint URLs from the snapshot table
queryset = service.get_by_scan(scan_id=self._scan_id)
for endpoint in queryset.iterator(chunk_size=1000):
yield endpoint.url
else:
# Other types do not support iter_urls yet
logger.warning(
"Snapshot type '%s' does not support iter_urls; returning an empty iterator",
self._snapshot_type
)
return
def get_blacklist_filter(self) -> None:
"""快照数据已在上一阶段过滤过了"""
return None
@property
def snapshot_type(self) -> SnapshotType:
"""返回快照类型"""
return self._snapshot_type
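
As noted above, host_port snapshots bypass CIDR expansion; a brief usage sketch (values are illustrative):

provider = SnapshotTargetProvider(scan_id=100, snapshot_type="host_port")
for pair in provider.iter_hosts():
    print(pair)  # e.g. "example.com:80" - yielded as-is, no _expand_host validation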

View File

@@ -1,256 +0,0 @@
"""
Common property tests
Cross-provider property tests:
- Property 4: Context Propagation
- Property 5: Non-Database Provider Blacklist Filter
- Property 7: CIDR Expansion Consistency
"""
import pytest
from hypothesis import given, strategies as st, settings
from ipaddress import IPv4Network
from apps.scan.providers import (
ProviderContext,
ListTargetProvider,
DatabaseTargetProvider,
PipelineTargetProvider,
SnapshotTargetProvider
)
from apps.scan.providers.pipeline_provider import StageOutput
class TestContextPropagation:
"""
Property 4: Context Propagation
*For any* ProviderContext passed into a provider constructor,
the provider's target_id and scan_id properties must match the values in the context.
**Validates: Requirements 1.3, 1.5, 7.4, 7.5**
"""
@given(
target_id=st.integers(min_value=1, max_value=10000),
scan_id=st.integers(min_value=1, max_value=10000)
)
@settings(max_examples=100)
def test_property_4_list_provider_context_propagation(self, target_id, scan_id):
"""
Property 4: Context Propagation (ListTargetProvider)
Feature: scan-target-provider, Property 4: Context Propagation
**Validates: Requirements 1.3, 1.5, 7.4, 7.5**
"""
ctx = ProviderContext(target_id=target_id, scan_id=scan_id)
provider = ListTargetProvider(targets=["example.com"], context=ctx)
assert provider.target_id == target_id
assert provider.scan_id == scan_id
assert provider.context.target_id == target_id
assert provider.context.scan_id == scan_id
@given(
target_id=st.integers(min_value=1, max_value=10000),
scan_id=st.integers(min_value=1, max_value=10000)
)
@settings(max_examples=100)
def test_property_4_database_provider_context_propagation(self, target_id, scan_id):
"""
Property 4: Context Propagation (DatabaseTargetProvider)
Feature: scan-target-provider, Property 4: Context Propagation
**Validates: Requirements 1.3, 1.5, 7.4, 7.5**
"""
ctx = ProviderContext(target_id=999, scan_id=scan_id)
# DatabaseTargetProvider overrides the target_id in the context
provider = DatabaseTargetProvider(target_id=target_id, context=ctx)
assert provider.target_id == target_id # from the constructor argument
assert provider.scan_id == scan_id # from the context
assert provider.context.target_id == target_id
assert provider.context.scan_id == scan_id
@given(
target_id=st.integers(min_value=1, max_value=10000),
scan_id=st.integers(min_value=1, max_value=10000)
)
@settings(max_examples=100)
def test_property_4_pipeline_provider_context_propagation(self, target_id, scan_id):
"""
Property 4: Context Propagation (PipelineTargetProvider)
Feature: scan-target-provider, Property 4: Context Propagation
**Validates: Requirements 1.3, 1.5, 7.4, 7.5**
"""
ctx = ProviderContext(target_id=target_id, scan_id=scan_id)
stage_output = StageOutput(hosts=["example.com"])
provider = PipelineTargetProvider(previous_output=stage_output, context=ctx)
assert provider.target_id == target_id
assert provider.scan_id == scan_id
assert provider.context.target_id == target_id
assert provider.context.scan_id == scan_id
@given(
target_id=st.integers(min_value=1, max_value=10000),
scan_id=st.integers(min_value=1, max_value=10000)
)
@settings(max_examples=100)
def test_property_4_snapshot_provider_context_propagation(self, target_id, scan_id):
"""
Property 4: Context Propagation (SnapshotTargetProvider)
Feature: scan-target-provider, Property 4: Context Propagation
**Validates: Requirements 1.3, 1.5, 7.4, 7.5**
"""
ctx = ProviderContext(target_id=target_id, scan_id=999)
# SnapshotTargetProvider overrides the scan_id in the context
provider = SnapshotTargetProvider(
scan_id=scan_id,
snapshot_type="subdomain",
context=ctx
)
assert provider.target_id == target_id # from the context
assert provider.scan_id == scan_id # from the constructor argument
assert provider.context.target_id == target_id
assert provider.context.scan_id == scan_id
class TestNonDatabaseProviderBlacklistFilter:
"""
Property 5: Non-Database Provider Blacklist Filter
*For any* ListTargetProvider or PipelineTargetProvider instance,
get_blacklist_filter() must return None.
**Validates: Requirements 3.4, 9.4, 9.5**
"""
@given(targets=st.lists(st.text(min_size=1, max_size=20), max_size=10))
@settings(max_examples=100)
def test_property_5_list_provider_no_blacklist(self, targets):
"""
Property 5: Non-Database Provider Blacklist Filter (ListTargetProvider)
Feature: scan-target-provider, Property 5: Non-Database Provider Blacklist Filter
**Validates: Requirements 3.4, 9.4, 9.5**
"""
provider = ListTargetProvider(targets=targets)
assert provider.get_blacklist_filter() is None
@given(hosts=st.lists(st.text(min_size=1, max_size=20), max_size=10))
@settings(max_examples=100)
def test_property_5_pipeline_provider_no_blacklist(self, hosts):
"""
Property 5: Non-Database Provider Blacklist Filter (PipelineTargetProvider)
Feature: scan-target-provider, Property 5: Non-Database Provider Blacklist Filter
**Validates: Requirements 3.4, 9.4, 9.5**
"""
stage_output = StageOutput(hosts=hosts)
provider = PipelineTargetProvider(previous_output=stage_output)
assert provider.get_blacklist_filter() is None
def test_property_5_snapshot_provider_no_blacklist(self):
"""
Property 5: Non-Database Provider Blacklist Filter (SnapshotTargetProvider)
Feature: scan-target-provider, Property 5: Non-Database Provider Blacklist Filter
**Validates: Requirements 3.4, 9.4, 9.5**
"""
provider = SnapshotTargetProvider(scan_id=1, snapshot_type="subdomain")
assert provider.get_blacklist_filter() is None
class TestCIDRExpansionConsistency:
"""
Property 7: CIDR Expansion Consistency
*For any* CIDR string (e.g. "192.168.1.0/24"), iter_hosts() on every provider
must expand it into the same list of individual IP addresses.
**Validates: Requirements 1.1, 3.6**
"""
@given(
# Generate small CIDR ranges to keep the test fast
network_prefix=st.integers(min_value=1, max_value=254),
cidr_suffix=st.integers(min_value=28, max_value=30) # /28 = 16 IPs, /30 = 4 IPs
)
@settings(max_examples=50, deadline=None)
def test_property_7_cidr_expansion_consistency(self, network_prefix, cidr_suffix):
"""
Property 7: CIDR Expansion Consistency
Feature: scan-target-provider, Property 7: CIDR Expansion Consistency
**Validates: Requirements 1.1, 3.6**
For any CIDR string, all Providers should expand it to the same IP list.
"""
cidr = f"192.168.{network_prefix}.0/{cidr_suffix}"
# Compute the expected IP list
network = IPv4Network(cidr, strict=False)
# Excludes the network and broadcast addresses
expected_ips = [str(ip) for ip in network.hosts()]
# If the CIDR is too small (/31 or /32), use all addresses
if not expected_ips:
expected_ips = [str(ip) for ip in network]
# ListTargetProvider
list_provider = ListTargetProvider(targets=[cidr])
list_result = list(list_provider.iter_hosts())
# PipelineTargetProvider
stage_output = StageOutput(hosts=[cidr])
pipeline_provider = PipelineTargetProvider(previous_output=stage_output)
pipeline_result = list(pipeline_provider.iter_hosts())
# Verify: all providers expand to the same result
assert list_result == expected_ips, f"ListProvider CIDR expansion mismatch for {cidr}"
assert pipeline_result == expected_ips, f"PipelineProvider CIDR expansion mismatch for {cidr}"
assert list_result == pipeline_result, f"Providers produce different results for {cidr}"
def test_cidr_expansion_with_multiple_cidrs(self):
"""测试多个 CIDR 的展开一致性"""
cidrs = ["192.168.1.0/30", "10.0.0.0/30"]
# Compute the expected result
expected_ips = []
for cidr in cidrs:
network = IPv4Network(cidr, strict=False)
expected_ips.extend([str(ip) for ip in network.hosts()])
# ListTargetProvider
list_provider = ListTargetProvider(targets=cidrs)
list_result = list(list_provider.iter_hosts())
# PipelineTargetProvider
stage_output = StageOutput(hosts=cidrs)
pipeline_provider = PipelineTargetProvider(previous_output=stage_output)
pipeline_result = list(pipeline_provider.iter_hosts())
# Verify
assert list_result == expected_ips
assert pipeline_result == expected_ips
assert list_result == pipeline_result
def test_mixed_hosts_and_cidrs(self):
"""测试混合主机和 CIDR 的处理"""
targets = ["example.com", "192.168.1.0/30", "test.com"]
# Compute the expected result
network = IPv4Network("192.168.1.0/30", strict=False)
cidr_ips = [str(ip) for ip in network.hosts()]
expected = ["example.com"] + cidr_ips + ["test.com"]
# ListTargetProvider
list_provider = ListTargetProvider(targets=targets)
list_result = list(list_provider.iter_hosts())
# Verify
assert list_result == expected

View File

@@ -1,152 +0,0 @@
"""
ListTargetProvider property tests
Property 1: ListTargetProvider Round-Trip
*For any* host list and URL list, iterating iter_hosts() and iter_urls() on a freshly
created ListTargetProvider must return the same elements as the input, in the same order.
**Validates: Requirements 3.1, 3.2**
"""
import pytest
from hypothesis import given, strategies as st, settings, assume
from apps.scan.providers.list_provider import ListTargetProvider
from apps.scan.providers.base import ProviderContext
# Strategy for generating valid domains
def valid_domain_strategy():
"""Generate a valid domain"""
# Simple domain shape: subdomain.domain.tld
label = st.text(
alphabet=st.characters(whitelist_categories=('L',), min_codepoint=97, max_codepoint=122),
min_size=2,
max_size=10
)
return st.builds(
lambda a, b, c: f"{a}.{b}.{c}",
label, label, st.sampled_from(['com', 'net', 'org', 'io'])
)
# Strategy for generating valid IP addresses
def valid_ip_strategy():
"""Generate a valid IPv4 address"""
octet = st.integers(min_value=1, max_value=254)
return st.builds(
lambda a, b, c, d: f"{a}.{b}.{c}.{d}",
octet, octet, octet, octet
)
# Combined strategy: domain or IP
host_strategy = st.one_of(valid_domain_strategy(), valid_ip_strategy())
# Strategy for generating valid URLs
def valid_url_strategy():
"""Generate a valid URL"""
domain = valid_domain_strategy()
return st.builds(
lambda d, path: f"https://{d}/{path}" if path else f"https://{d}",
domain,
st.one_of(
st.just(""),
st.text(
alphabet=st.characters(whitelist_categories=('L',), min_codepoint=97, max_codepoint=122),
min_size=1,
max_size=10
)
)
)
url_strategy = valid_url_strategy()
class TestListTargetProviderProperties:
"""ListTargetProvider 属性测试类"""
@given(hosts=st.lists(host_strategy, max_size=50))
@settings(max_examples=100)
def test_property_1_hosts_round_trip(self, hosts):
"""
Property 1: ListTargetProvider Round-Trip (hosts)
Feature: scan-target-provider, Property 1: ListTargetProvider Round-Trip
**Validates: Requirements 3.1, 3.2**
For any host list, creating a ListTargetProvider and iterating iter_hosts()
should return the same elements in the same order.
"""
# ListTargetProvider takes a targets argument and auto-classifies into hosts/urls
provider = ListTargetProvider(targets=hosts)
result = list(provider.iter_hosts())
assert result == hosts
@given(urls=st.lists(url_strategy, max_size=50))
@settings(max_examples=100)
def test_property_1_urls_round_trip(self, urls):
"""
Property 1: ListTargetProvider Round-Trip (urls)
Feature: scan-target-provider, Property 1: ListTargetProvider Round-Trip
**Validates: Requirements 3.1, 3.2**
For any URL list, creating a ListTargetProvider and iterating iter_urls()
should return the same elements in the same order.
"""
# ListTargetProvider takes a targets argument and auto-classifies into hosts/urls
provider = ListTargetProvider(targets=urls)
result = list(provider.iter_urls())
assert result == urls
@given(
hosts=st.lists(host_strategy, max_size=30),
urls=st.lists(url_strategy, max_size=30)
)
@settings(max_examples=100)
def test_property_1_combined_round_trip(self, hosts, urls):
"""
Property 1: ListTargetProvider Round-Trip (combined)
Feature: scan-target-provider, Property 1: ListTargetProvider Round-Trip
**Validates: Requirements 3.1, 3.2**
For any combination of hosts and URLs, both should round-trip correctly.
"""
# Merge hosts and urls; ListTargetProvider classifies them automatically
combined = hosts + urls
provider = ListTargetProvider(targets=combined)
hosts_result = list(provider.iter_hosts())
urls_result = list(provider.iter_urls())
assert hosts_result == hosts
assert urls_result == urls
class TestListTargetProviderUnit:
"""ListTargetProvider 单元测试类"""
def test_empty_lists(self):
"""测试空列表返回空迭代器 - Requirements 3.5"""
provider = ListTargetProvider()
assert list(provider.iter_hosts()) == []
assert list(provider.iter_urls()) == []
def test_blacklist_filter_returns_none(self):
"""测试黑名单过滤器返回 None - Requirements 3.4"""
provider = ListTargetProvider(targets=["example.com"])
assert provider.get_blacklist_filter() is None
def test_target_id_association(self):
"""测试 target_id 关联 - Requirements 3.3"""
ctx = ProviderContext(target_id=123)
provider = ListTargetProvider(targets=["example.com"], context=ctx)
assert provider.target_id == 123
def test_context_propagation(self):
"""测试上下文传递"""
ctx = ProviderContext(target_id=456, scan_id=789)
provider = ListTargetProvider(targets=["example.com"], context=ctx)
assert provider.target_id == 456
assert provider.scan_id == 789

View File

@@ -1,180 +0,0 @@
"""
PipelineTargetProvider property tests
Property 3: PipelineTargetProvider Round-Trip
*For any* StageOutput object, iter_hosts() and iter_urls() on a PipelineTargetProvider
must return the same elements as the hosts and urls lists in the StageOutput.
**Validates: Requirements 5.1, 5.2**
"""
import pytest
from hypothesis import given, strategies as st, settings
from apps.scan.providers.pipeline_provider import PipelineTargetProvider, StageOutput
from apps.scan.providers.base import ProviderContext
# Strategy for generating valid domains
def valid_domain_strategy():
"""Generate a valid domain"""
label = st.text(
alphabet=st.characters(whitelist_categories=('L',), min_codepoint=97, max_codepoint=122),
min_size=2,
max_size=10
)
return st.builds(
lambda a, b, c: f"{a}.{b}.{c}",
label, label, st.sampled_from(['com', 'net', 'org', 'io'])
)
# Strategy for generating valid IP addresses
def valid_ip_strategy():
"""Generate a valid IPv4 address"""
octet = st.integers(min_value=1, max_value=254)
return st.builds(
lambda a, b, c, d: f"{a}.{b}.{c}.{d}",
octet, octet, octet, octet
)
# Combined strategy: domain or IP
host_strategy = st.one_of(valid_domain_strategy(), valid_ip_strategy())
# Strategy for generating valid URLs
def valid_url_strategy():
"""Generate a valid URL"""
domain = valid_domain_strategy()
return st.builds(
lambda d, path: f"https://{d}/{path}" if path else f"https://{d}",
domain,
st.one_of(
st.just(""),
st.text(
alphabet=st.characters(whitelist_categories=('L',), min_codepoint=97, max_codepoint=122),
min_size=1,
max_size=10
)
)
)
url_strategy = valid_url_strategy()
class TestPipelineTargetProviderProperties:
"""PipelineTargetProvider 属性测试类"""
@given(hosts=st.lists(host_strategy, max_size=50))
@settings(max_examples=100)
def test_property_3_hosts_round_trip(self, hosts):
"""
Property 3: PipelineTargetProvider Round-Trip (hosts)
Feature: scan-target-provider, Property 3: PipelineTargetProvider Round-Trip
**Validates: Requirements 5.1, 5.2**
For any StageOutput with hosts, PipelineTargetProvider should return
the same hosts in the same order.
"""
stage_output = StageOutput(hosts=hosts)
provider = PipelineTargetProvider(previous_output=stage_output)
result = list(provider.iter_hosts())
assert result == hosts
@given(urls=st.lists(url_strategy, max_size=50))
@settings(max_examples=100)
def test_property_3_urls_round_trip(self, urls):
"""
Property 3: PipelineTargetProvider Round-Trip (urls)
Feature: scan-target-provider, Property 3: PipelineTargetProvider Round-Trip
**Validates: Requirements 5.1, 5.2**
For any StageOutput with urls, PipelineTargetProvider should return
the same urls in the same order.
"""
stage_output = StageOutput(urls=urls)
provider = PipelineTargetProvider(previous_output=stage_output)
result = list(provider.iter_urls())
assert result == urls
@given(
hosts=st.lists(host_strategy, max_size=30),
urls=st.lists(url_strategy, max_size=30)
)
@settings(max_examples=100)
def test_property_3_combined_round_trip(self, hosts, urls):
"""
Property 3: PipelineTargetProvider Round-Trip (combined)
Feature: scan-target-provider, Property 3: PipelineTargetProvider Round-Trip
**Validates: Requirements 5.1, 5.2**
For any StageOutput with both hosts and urls, both should round-trip correctly.
"""
stage_output = StageOutput(hosts=hosts, urls=urls)
provider = PipelineTargetProvider(previous_output=stage_output)
hosts_result = list(provider.iter_hosts())
urls_result = list(provider.iter_urls())
assert hosts_result == hosts
assert urls_result == urls
class TestPipelineTargetProviderUnit:
"""PipelineTargetProvider 单元测试类"""
def test_empty_stage_output(self):
"""测试空 StageOutput 返回空迭代器 - Requirements 5.5"""
stage_output = StageOutput()
provider = PipelineTargetProvider(previous_output=stage_output)
assert list(provider.iter_hosts()) == []
assert list(provider.iter_urls()) == []
def test_blacklist_filter_returns_none(self):
"""测试黑名单过滤器返回 None - Requirements 5.3"""
stage_output = StageOutput(hosts=["example.com"])
provider = PipelineTargetProvider(previous_output=stage_output)
assert provider.get_blacklist_filter() is None
def test_target_id_association(self):
"""测试 target_id 关联 - Requirements 5.4"""
stage_output = StageOutput(hosts=["example.com"])
provider = PipelineTargetProvider(previous_output=stage_output, target_id=123)
assert provider.target_id == 123
def test_context_propagation(self):
"""测试上下文传递"""
ctx = ProviderContext(target_id=456, scan_id=789)
stage_output = StageOutput(hosts=["example.com"])
provider = PipelineTargetProvider(previous_output=stage_output, context=ctx)
assert provider.target_id == 456
assert provider.scan_id == 789
def test_previous_output_property(self):
"""测试 previous_output 属性"""
stage_output = StageOutput(hosts=["example.com"], urls=["https://example.com"])
provider = PipelineTargetProvider(previous_output=stage_output)
assert provider.previous_output is stage_output
assert provider.previous_output.hosts == ["example.com"]
assert provider.previous_output.urls == ["https://example.com"]
def test_stage_output_with_metadata(self):
"""测试带元数据的 StageOutput"""
stage_output = StageOutput(
hosts=["example.com"],
urls=["https://example.com"],
new_targets=["new.example.com"],
stats={"count": 1},
success=True,
error=None
)
provider = PipelineTargetProvider(previous_output=stage_output)
assert list(provider.iter_hosts()) == ["example.com"]
assert list(provider.iter_urls()) == ["https://example.com"]
assert provider.previous_output.new_targets == ["new.example.com"]
assert provider.previous_output.stats == {"count": 1}

View File

@@ -1,191 +0,0 @@
"""
SnapshotTargetProvider unit tests
"""
import pytest
from unittest.mock import Mock, patch
from apps.scan.providers import SnapshotTargetProvider, ProviderContext
class TestSnapshotTargetProvider:
"""SnapshotTargetProvider 测试类"""
def test_init_with_scan_id_and_type(self):
"""测试初始化"""
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain"
)
assert provider.scan_id == 100
assert provider.snapshot_type == "subdomain"
assert provider.target_id is None # default context
def test_init_with_context(self):
"""测试带 context 初始化"""
ctx = ProviderContext(target_id=1, scan_id=100)
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain",
context=ctx
)
assert provider.scan_id == 100
assert provider.target_id == 1
assert provider.snapshot_type == "subdomain"
@patch('apps.asset.services.snapshot.SubdomainSnapshotsService')
def test_iter_hosts_subdomain(self, mock_service_class):
"""测试从子域名快照迭代主机"""
# Mock service
mock_service = Mock()
mock_service.iter_subdomain_names_by_scan.return_value = iter([
"a.example.com",
"b.example.com"
])
mock_service_class.return_value = mock_service
# Create the provider
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain"
)
# Iterate hosts
hosts = list(provider.iter_hosts())
assert hosts == ["a.example.com", "b.example.com"]
mock_service.iter_subdomain_names_by_scan.assert_called_once_with(
scan_id=100,
chunk_size=1000
)
@patch('apps.asset.services.snapshot.HostPortMappingSnapshotsService')
def test_iter_hosts_host_port(self, mock_service_class):
"""测试从主机端口映射快照迭代主机"""
# Mock queryset
mock_mapping1 = Mock()
mock_mapping1.host = "example.com"
mock_mapping1.port = 80
mock_mapping2 = Mock()
mock_mapping2.host = "example.com"
mock_mapping2.port = 443
mock_queryset = Mock()
mock_queryset.iterator.return_value = iter([mock_mapping1, mock_mapping2])
# Mock service
mock_service = Mock()
mock_service.get_by_scan.return_value = mock_queryset
mock_service_class.return_value = mock_service
# Create the provider
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="host_port"
)
# Iterate hosts
hosts = list(provider.iter_hosts())
assert hosts == ["example.com:80", "example.com:443"]
mock_service.get_by_scan.assert_called_once_with(scan_id=100)
@patch('apps.asset.services.snapshot.WebsiteSnapshotsService')
def test_iter_urls_website(self, mock_service_class):
"""测试从网站快照迭代 URL"""
# Mock service
mock_service = Mock()
mock_service.iter_website_urls_by_scan.return_value = iter([
"http://example.com",
"https://example.com"
])
mock_service_class.return_value = mock_service
# Create the provider
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="website"
)
# Iterate URLs
urls = list(provider.iter_urls())
assert urls == ["http://example.com", "https://example.com"]
mock_service.iter_website_urls_by_scan.assert_called_once_with(
scan_id=100,
chunk_size=1000
)
@patch('apps.asset.services.snapshot.EndpointSnapshotsService')
def test_iter_urls_endpoint(self, mock_service_class):
"""测试从端点快照迭代 URL"""
# Mock queryset
mock_endpoint1 = Mock()
mock_endpoint1.url = "http://example.com/api/v1"
mock_endpoint2 = Mock()
mock_endpoint2.url = "http://example.com/api/v2"
mock_queryset = Mock()
mock_queryset.iterator.return_value = iter([mock_endpoint1, mock_endpoint2])
# Mock service
mock_service = Mock()
mock_service.get_by_scan.return_value = mock_queryset
mock_service_class.return_value = mock_service
# Create the provider
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="endpoint"
)
# Iterate URLs
urls = list(provider.iter_urls())
assert urls == ["http://example.com/api/v1", "http://example.com/api/v2"]
mock_service.get_by_scan.assert_called_once_with(scan_id=100)
def test_iter_hosts_unsupported_type(self):
"""测试不支持的快照类型iter_hosts"""
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="website" # website 不支持 iter_hosts
)
hosts = list(provider.iter_hosts())
assert hosts == []
def test_iter_urls_unsupported_type(self):
"""测试不支持的快照类型iter_urls"""
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain" # subdomain 不支持 iter_urls
)
urls = list(provider.iter_urls())
assert urls == []
def test_get_blacklist_filter(self):
"""测试黑名单过滤器(快照模式不使用黑名单)"""
provider = SnapshotTargetProvider(
scan_id=100,
snapshot_type="subdomain"
)
assert provider.get_blacklist_filter() is None
def test_context_propagation(self):
"""测试上下文传递"""
ctx = ProviderContext(target_id=456, scan_id=789)
provider = SnapshotTargetProvider(
scan_id=100, # overrides the scan_id carried by the context
snapshot_type="subdomain",
context=ctx
)
assert provider.target_id == 456
assert provider.scan_id == 100 # scan_id is set in __init__ and wins over the context

View File

@@ -1,189 +0,0 @@
#!/usr/bin/env python
"""
Scan task launcher script
Executed when a dynamic scan container starts.
Must fetch configuration and set environment variables before Django is imported.
"""
import argparse
import sys
import os
import traceback
def diagnose_prefect_environment():
"""诊断 Prefect 运行环境,输出详细信息用于排查问题"""
print("\n" + "="*60)
print("Prefect 环境诊断")
print("="*60)
# 1. 检查 Prefect 相关环境变量
print("\n[诊断] Prefect 环境变量:")
prefect_vars = [
'PREFECT_HOME',
'PREFECT_API_URL',
'PREFECT_SERVER_EPHEMERAL_ENABLED',
'PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS',
'PREFECT_SERVER_DATABASE_CONNECTION_URL',
'PREFECT_LOGGING_LEVEL',
'PREFECT_DEBUG_MODE',
]
for var in prefect_vars:
value = os.environ.get(var, 'NOT SET')
print(f" {var}={value}")
# 2. Check the PREFECT_HOME directory
prefect_home = os.environ.get('PREFECT_HOME', os.path.expanduser('~/.prefect'))
print(f"\n[diag] PREFECT_HOME directory: {prefect_home}")
if os.path.exists(prefect_home):
print(f" ✓ directory exists")
print(f" writable: {os.access(prefect_home, os.W_OK)}")
try:
files = os.listdir(prefect_home)
print(f" files: {files[:10]}{'...' if len(files) > 10 else ''}")
except Exception as e:
print(f" ✗ cannot list files: {e}")
else:
print(f" directory missing; trying to create it...")
try:
os.makedirs(prefect_home, exist_ok=True)
print(f" ✓ created")
except Exception as e:
print(f" ✗ creation failed: {e}")
# 3. Check uvicorn availability
print(f"\n[diag] uvicorn availability:")
import shutil
uvicorn_path = shutil.which('uvicorn')
if uvicorn_path:
print(f" ✓ uvicorn path: {uvicorn_path}")
else:
print(f" ✗ uvicorn is not on PATH")
print(f" PATH: {os.environ.get('PATH', 'NOT SET')}")
# 4. Check the Prefect version
print(f"\n[diag] Prefect version:")
try:
import prefect
print(f" ✓ prefect=={prefect.__version__}")
except Exception as e:
print(f" ✗ cannot import prefect: {e}")
# 5. Check SQLite support
print(f"\n[diag] SQLite support:")
try:
import sqlite3
print(f" ✓ sqlite3 version: {sqlite3.sqlite_version}")
# Try creating a database
test_db = os.path.join(prefect_home, 'test.db')
conn = sqlite3.connect(test_db)
conn.execute('CREATE TABLE IF NOT EXISTS test (id INTEGER)')
conn.close()
os.remove(test_db)
print(f" ✓ SQLite read/write test passed")
except Exception as e:
print(f" ✗ SQLite test failed: {e}")
# 6. Check port-binding capability
print(f"\n[diag] Port binding test:")
try:
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('127.0.0.1', 0))
port = sock.getsockname()[1]
sock.close()
print(f" ✓ can bind to 127.0.0.1 (test port: {port})")
except Exception as e:
print(f" ✗ port binding failed: {e}")
# 7. Check system resources
print(f"\n[diag] System resources:")
try:
import psutil
mem = psutil.virtual_memory()
print(f" total memory: {mem.total / 1024 / 1024:.0f} MB")
print(f" available memory: {mem.available / 1024 / 1024:.0f} MB")
print(f" memory usage: {mem.percent}%")
except ImportError:
print(f" psutil not installed; skipping memory check")
except Exception as e:
print(f" ✗ resource check failed: {e}")
print("\n" + "="*60)
print("Diagnostics finished")
print("="*60 + "\n")
def main():
print("="*60)
print("run_initiate_scan.py 启动")
print(f" Python: {sys.version}")
print(f" CWD: {os.getcwd()}")
print(f" SERVER_URL: {os.environ.get('SERVER_URL', 'NOT SET')}")
print("="*60)
# 1. Fetch configuration from the config center and initialize Django (must happen before Django is imported)
print("[1/4] Fetching configuration from the config center...")
try:
from apps.common.container_bootstrap import fetch_config_and_setup_django
fetch_config_and_setup_django()
print("[1/4] ✓ 配置获取成功")
except Exception as e:
print(f"[1/4] ✗ 配置获取失败: {e}")
traceback.print_exc()
sys.exit(1)
# 2. Parse command-line arguments
print("[2/4] Parsing command-line arguments...")
parser = argparse.ArgumentParser(description="Run the scan initialization flow")
parser.add_argument("--scan_id", type=int, required=True, help="scan task ID")
parser.add_argument("--target_name", type=str, required=True, help="target name")
parser.add_argument("--target_id", type=int, required=True, help="target ID")
parser.add_argument("--scan_workspace_dir", type=str, required=True, help="scan workspace directory")
parser.add_argument("--engine_name", type=str, required=True, help="engine name")
parser.add_argument("--scheduled_scan_name", type=str, default=None, help="scheduled scan name (optional)")
args = parser.parse_args()
print(f"[2/4] ✓ arguments parsed:")
print(f" scan_id: {args.scan_id}")
print(f" target_name: {args.target_name}")
print(f" target_id: {args.target_id}")
print(f" scan_workspace_dir: {args.scan_workspace_dir}")
print(f" engine_name: {args.engine_name}")
print(f" scheduled_scan_name: {args.scheduled_scan_name}")
# 2.5. Run Prefect environment diagnostics (DEBUG mode only)
if os.environ.get('DEBUG', '').lower() == 'true':
diagnose_prefect_environment()
# 3. Django-dependent modules can now be imported safely
print("[3/4] Importing initiate_scan_flow...")
try:
from apps.scan.flows.initiate_scan_flow import initiate_scan_flow
print("[3/4] ✓ 导入成功")
except Exception as e:
print(f"[3/4] ✗ 导入失败: {e}")
traceback.print_exc()
sys.exit(1)
# 4. Run the flow
print("[4/4] Running initiate_scan_flow...")
try:
result = initiate_scan_flow(
scan_id=args.scan_id,
target_name=args.target_name,
target_id=args.target_id,
scan_workspace_dir=args.scan_workspace_dir,
engine_name=args.engine_name,
scheduled_scan_name=args.scheduled_scan_name,
)
print("[4/4] ✓ Flow 执行完成")
print(f"结果: {result}")
except Exception as e:
print(f"[4/4] ✗ Flow 执行失败: {e}")
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()
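
A hedged example of how the container entrypoint might invoke this script; the flags match the argparse definitions above, the values are illustrative:

python run_initiate_scan.py \
  --scan_id 100 \
  --target_name example.com \
  --target_id 1 \
  --scan_workspace_dir /workspace/scan_100 \
  --engine_name full_scan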

View File

@@ -1,295 +0,0 @@
"""
Quick scan service
Parses user input (URLs, domains, IPs, CIDRs) and creates the corresponding asset records
"""
import logging
from dataclasses import dataclass
from typing import Optional, Literal, List, Dict, Any
from urllib.parse import urlparse
from django.db import transaction
from apps.common.validators import validate_url, detect_input_type, validate_domain, validate_ip, validate_cidr, is_valid_ip
from apps.targets.services.target_service import TargetService
from apps.targets.models import Target
from apps.asset.dtos import WebSiteDTO
from apps.asset.dtos.asset import EndpointDTO
from apps.asset.repositories.asset.website_repository import DjangoWebSiteRepository
from apps.asset.repositories.asset.endpoint_repository import DjangoEndpointRepository
logger = logging.getLogger(__name__)
@dataclass
class ParsedInputDTO:
"""
解析输入 DTO
只在快速扫描流程中使用
"""
original_input: str
input_type: Literal['url', 'domain', 'ip', 'cidr']
target_name: str # host/domain/ip/cidr
target_type: Literal['domain', 'ip', 'cidr']
website_url: Optional[str] = None # root URL (scheme://host[:port])
endpoint_url: Optional[str] = None # full URL (with path)
is_valid: bool = True
error: Optional[str] = None
line_number: Optional[int] = None
class QuickScanService:
"""快速扫描服务 - 解析输入并创建资产"""
def __init__(self):
self.target_service = TargetService()
self.website_repo = DjangoWebSiteRepository()
self.endpoint_repo = DjangoEndpointRepository()
def parse_inputs(self, inputs: List[str]) -> List[ParsedInputDTO]:
"""
Parse multi-line input
Args:
inputs: list of input strings (one per line)
Returns:
list of parse results (blank lines are skipped)
"""
results = []
for line_number, input_str in enumerate(inputs, start=1):
input_str = input_str.strip()
# Skip blank lines
if not input_str:
continue
try:
# Detect the input type
input_type = detect_input_type(input_str)
if input_type == 'url':
dto = self._parse_url_input(input_str, line_number)
else:
dto = self._parse_target_input(input_str, input_type, line_number)
results.append(dto)
except ValueError as e:
# Parsing failed; record the error
results.append(ParsedInputDTO(
original_input=input_str,
input_type='domain', # default type
target_name=input_str,
target_type='domain',
is_valid=False,
error=str(e),
line_number=line_number
))
return results
def _parse_url_input(self, url_str: str, line_number: int) -> ParsedInputDTO:
"""
Parse a URL input
Args:
url_str: the URL string
line_number: line number
Returns:
ParsedInputDTO
"""
# Validate the URL format
validate_url(url_str)
# Parse with the standard library
parsed = urlparse(url_str)
host = parsed.hostname # without the port
has_path = parsed.path and parsed.path != '/'
# Build root_url: scheme://host[:port]
root_url = f"{parsed.scheme}://{parsed.netloc}"
# Detect the host type (domain or ip)
target_type = 'ip' if is_valid_ip(host) else 'domain'
return ParsedInputDTO(
original_input=url_str,
input_type='url',
target_name=host,
target_type=target_type,
website_url=root_url,
endpoint_url=url_str if has_path else None,
line_number=line_number
)
def _parse_target_input(
self,
input_str: str,
input_type: str,
line_number: int
) -> ParsedInputDTO:
"""
Parse a non-URL input (domain/ip/cidr)
Args:
input_str: the input string
input_type: the input type
line_number: line number
Returns:
ParsedInputDTO
"""
# Validate the format
if input_type == 'domain':
validate_domain(input_str)
target_type = 'domain'
elif input_type == 'ip':
validate_ip(input_str)
target_type = 'ip'
elif input_type == 'cidr':
validate_cidr(input_str)
target_type = 'cidr'
else:
raise ValueError(f"未知的输入类型: {input_type}")
return ParsedInputDTO(
original_input=input_str,
input_type=input_type,
target_name=input_str,
target_type=target_type,
website_url=None,
endpoint_url=None,
line_number=line_number
)
@transaction.atomic
def process_quick_scan(
self,
inputs: List[str],
engine_id: int
) -> Dict[str, Any]:
"""
Handle a quick scan request
Args:
inputs: list of input strings
engine_id: scan engine ID
Returns:
result dictionary
"""
# 1. Parse the input
parsed_inputs = self.parse_inputs(inputs)
# Separate valid and invalid inputs
valid_inputs = [p for p in parsed_inputs if p.is_valid]
invalid_inputs = [p for p in parsed_inputs if not p.is_valid]
if not valid_inputs:
return {
'targets': [],
'target_stats': {'created': 0, 'reused': 0, 'failed': len(invalid_inputs)},
'asset_stats': {'websites_created': 0, 'endpoints_created': 0},
'errors': [
{'line_number': p.line_number, 'input': p.original_input, 'error': p.error}
for p in invalid_inputs
]
}
# 2. Create assets
asset_result = self.create_assets_from_parsed_inputs(valid_inputs)
# 3. Return the result
return {
'targets': asset_result['targets'],
'target_stats': asset_result['target_stats'],
'asset_stats': asset_result['asset_stats'],
'errors': [
{'line_number': p.line_number, 'input': p.original_input, 'error': p.error}
for p in invalid_inputs
]
}
def create_assets_from_parsed_inputs(
self,
parsed_inputs: List[ParsedInputDTO]
) -> Dict[str, Any]:
"""
Create assets from parse results
Args:
parsed_inputs: list of parse results (valid inputs only)
Returns:
creation result dictionary
"""
# 1. Collect all target data (in memory, deduplicated)
targets_data = {}
for dto in parsed_inputs:
if dto.target_name not in targets_data:
targets_data[dto.target_name] = {'name': dto.target_name, 'type': dto.target_type}
targets_list = list(targets_data.values())
# 2. Bulk-create Targets (reusing the existing method)
target_result = self.target_service.batch_create_targets(targets_list)
# 3. Query the just-created Targets and build a name -> id map
target_names = [d['name'] for d in targets_list]
targets = Target.objects.filter(name__in=target_names)
target_id_map = {t.name: t.id for t in targets}
# 4. Collect Website DTOs (in memory, deduplicated)
website_dtos = []
seen_websites = set()
for dto in parsed_inputs:
if dto.website_url and dto.website_url not in seen_websites:
seen_websites.add(dto.website_url)
target_id = target_id_map.get(dto.target_name)
if target_id:
website_dtos.append(WebSiteDTO(
target_id=target_id,
url=dto.website_url,
host=dto.target_name
))
# 5. Bulk-create Websites (existing rows are skipped)
websites_created = 0
if website_dtos:
websites_created = self.website_repo.bulk_create_ignore_conflicts(website_dtos)
# 6. Collect Endpoint DTOs (in memory, deduplicated)
endpoint_dtos = []
seen_endpoints = set()
for dto in parsed_inputs:
if dto.endpoint_url and dto.endpoint_url not in seen_endpoints:
seen_endpoints.add(dto.endpoint_url)
target_id = target_id_map.get(dto.target_name)
if target_id:
endpoint_dtos.append(EndpointDTO(
target_id=target_id,
url=dto.endpoint_url,
host=dto.target_name
))
# 7. Bulk-create Endpoints (skip if they already exist)
endpoints_created = 0
if endpoint_dtos:
endpoints_created = self.endpoint_repo.bulk_create_ignore_conflicts(endpoint_dtos)
return {
'targets': list(targets),
'target_stats': {
'created': target_result['created_count'],
'reused': 0, # bulk_create cannot distinguish created from reused
'failed': target_result['failed_count']
},
'asset_stats': {
'websites_created': websites_created,
'endpoints_created': endpoints_created
}
}
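A hedged usage sketch of the quick-scan flow above; the inputs and engine_id are invented, and which lines count as invalid depends on parse_inputs (not shown here):

from apps.scan.services.quick_scan_service import QuickScanService

service = QuickScanService()
result = service.process_quick_scan(
    inputs=[
        "example.com",                 # domain → Target only
        "https://example.com/api/v1",  # URL with a path → Target + Website + Endpoint
        "10.0.0.0/30",                 # CIDR → Target only
    ],
    engine_id=1,
)
# Invalid lines are reported per line_number rather than aborting the batch:
for err in result['errors']:
    print(err['line_number'], err['input'], err['error'])
print(result['target_stats'], result['asset_stats'])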

View File

@@ -1,258 +0,0 @@
"""
扫描任务服务
负责 Scan 模型的所有业务逻辑
"""
from __future__ import annotations
import logging
import uuid
from typing import Dict, List, TYPE_CHECKING
from datetime import datetime
from pathlib import Path
from django.conf import settings
from django.db import transaction
from django.db.utils import DatabaseError, IntegrityError, OperationalError
from django.core.exceptions import ValidationError, ObjectDoesNotExist
from apps.scan.models import Scan
from apps.scan.repositories import DjangoScanRepository
from apps.targets.repositories import DjangoTargetRepository, DjangoOrganizationRepository
from apps.engine.repositories import DjangoEngineRepository
from apps.targets.models import Target
from apps.engine.models import ScanEngine
from apps.common.definitions import ScanStatus
logger = logging.getLogger(__name__)
class ScanService:
"""
扫描任务服务(协调者)
职责:
- 协调各个子服务
- 提供统一的公共接口
- 保持向后兼容
注意:
- 具体业务逻辑已拆分到子服务
- 本类主要负责委托和协调
"""
# 终态集合:这些状态一旦设置,不应该被覆盖
FINAL_STATUSES = {
ScanStatus.COMPLETED,
ScanStatus.FAILED,
ScanStatus.CANCELLED
}
def __init__(self):
"""
初始化服务
"""
# 初始化子服务
from apps.scan.services.scan_creation_service import ScanCreationService
from apps.scan.services.scan_state_service import ScanStateService
from apps.scan.services.scan_control_service import ScanControlService
from apps.scan.services.scan_stats_service import ScanStatsService
self.creation_service = ScanCreationService()
self.state_service = ScanStateService()
self.control_service = ScanControlService()
self.stats_service = ScanStatsService()
# Keep ScanRepository (used by the get_scan method)
self.scan_repo = DjangoScanRepository()
def get_scan(self, scan_id: int, prefetch_relations: bool) -> Scan | None:
"""
获取扫描任务(包含关联对象)
自动预加载 engine 和 target避免 N+1 查询问题
Args:
scan_id: 扫描任务 ID
Returns:
Scan 对象(包含 engine 和 target或 None
"""
return self.scan_repo.get_by_id(scan_id, prefetch_relations)
def get_all_scans(self, prefetch_relations: bool = True):
return self.scan_repo.get_all(prefetch_relations=prefetch_relations)
def prepare_initiate_scan(
self,
organization_id: int | None = None,
target_id: int | None = None,
engine_id: int | None = None
) -> tuple[List[Target], ScanEngine]:
"""
为创建扫描任务做准备,返回所需的目标列表和扫描引擎
"""
return self.creation_service.prepare_initiate_scan(
organization_id, target_id, engine_id
)
def prepare_initiate_scan_multi_engine(
self,
organization_id: int | None = None,
target_id: int | None = None,
engine_ids: List[int] | None = None
) -> tuple[List[Target], str, List[str], List[int]]:
"""
为创建多引擎扫描任务做准备
Returns:
(目标列表, 合并配置, 引擎名称列表, 引擎ID列表)
"""
return self.creation_service.prepare_initiate_scan_multi_engine(
organization_id, target_id, engine_ids
)
def create_scans(
self,
targets: List[Target],
engine_ids: List[int],
engine_names: List[str],
yaml_configuration: str,
scheduled_scan_name: str | None = None
) -> List[Scan]:
"""批量创建扫描任务(委托给 ScanCreationService"""
return self.creation_service.create_scans(
targets, engine_ids, engine_names, yaml_configuration, scheduled_scan_name
)
# ==================== 状态管理方法(委托给 ScanStateService ====================
def update_status(
self,
scan_id: int,
status: ScanStatus,
error_message: str | None = None,
stopped_at: datetime | None = None
) -> bool:
"""更新 Scan 状态(委托给 ScanStateService"""
return self.state_service.update_status(
scan_id, status, error_message, stopped_at
)
def update_status_if_match(
self,
scan_id: int,
current_status: ScanStatus,
new_status: ScanStatus,
stopped_at: datetime | None = None
) -> bool:
"""条件更新 Scan 状态(委托给 ScanStateService"""
return self.state_service.update_status_if_match(
scan_id, current_status, new_status, stopped_at
)
def update_cached_stats(self, scan_id: int) -> dict | None:
"""更新缓存统计数据(委托给 ScanStateService返回统计数据字典"""
return self.state_service.update_cached_stats(scan_id)
# ==================== 进度跟踪方法(委托给 ScanStateService ====================
def init_stage_progress(self, scan_id: int, stages: list[str]) -> bool:
"""初始化阶段进度(委托给 ScanStateService"""
return self.state_service.init_stage_progress(scan_id, stages)
def start_stage(self, scan_id: int, stage: str) -> bool:
"""开始执行某个阶段(委托给 ScanStateService"""
return self.state_service.start_stage(scan_id, stage)
def complete_stage(self, scan_id: int, stage: str, detail: str | None = None) -> bool:
"""完成某个阶段(委托给 ScanStateService"""
return self.state_service.complete_stage(scan_id, stage, detail)
def fail_stage(self, scan_id: int, stage: str, error: str | None = None) -> bool:
"""标记某个阶段失败(委托给 ScanStateService"""
return self.state_service.fail_stage(scan_id, stage, error)
def cancel_running_stages(self, scan_id: int, final_status: str = "cancelled") -> bool:
"""取消所有正在运行的阶段(委托给 ScanStateService"""
return self.state_service.cancel_running_stages(scan_id, final_status)
# TODO待接入
def add_command_to_scan(self, scan_id: int, stage_name: str, tool_name: str, command: str) -> bool:
"""
增量添加命令到指定扫描阶段
Args:
scan_id: 扫描任务ID
stage_name: 阶段名称(如 'subdomain_discovery', 'port_scan'
tool_name: 工具名称
command: 执行命令
Returns:
bool: 是否成功添加
"""
try:
scan = self.get_scan(scan_id, prefetch_relations=False)
if not scan:
logger.error(f"扫描任务不存在: {scan_id}")
return False
stage_progress = scan.stage_progress or {}
# Make sure the stage exists
if stage_name not in stage_progress:
stage_progress[stage_name] = {'status': 'running', 'commands': []}
# Make sure the commands list exists
if 'commands' not in stage_progress[stage_name]:
stage_progress[stage_name]['commands'] = []
# Append the command incrementally
command_entry = f"{tool_name}: {command}"
stage_progress[stage_name]['commands'].append(command_entry)
scan.stage_progress = stage_progress
scan.save(update_fields=['stage_progress'])
command_count = len(stage_progress[stage_name]['commands'])
logger.info(f"✓ 记录命令: {stage_name}.{tool_name} (总计: {command_count})")
return True
except Exception as e:
logger.error(f"记录命令失败: {e}")
return False
# ==================== 删除和控制方法(委托给 ScanControlService ====================
def delete_scans_two_phase(self, scan_ids: List[int]) -> dict:
"""两阶段删除扫描任务(委托给 ScanControlService"""
return self.control_service.delete_scans_two_phase(scan_ids)
def stop_scan(self, scan_id: int) -> tuple[bool, int]:
"""停止扫描任务(委托给 ScanControlService"""
return self.control_service.stop_scan(scan_id)
def hard_delete_scans(self, scan_ids: List[int]) -> tuple[int, Dict[str, int]]:
"""
硬删除扫描任务(真正删除数据)
用于 Worker 容器中执行,删除已软删除的扫描及其关联数据。
Args:
scan_ids: 扫描任务 ID 列表
Returns:
(删除数量, 详情字典)
"""
return self.scan_repo.hard_delete_by_ids(scan_ids)
# ==================== 统计方法(委托给 ScanStatsService ====================
def get_statistics(self) -> dict:
"""获取扫描统计数据(委托给 ScanStatsService"""
return self.stats_service.get_statistics()
# 导出接口
__all__ = ['ScanService']
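A hedged sketch of how a caller can use the coordinator's conditional update to respect FINAL_STATUSES; the scan_id is illustrative and only methods defined above are used:

from apps.scan.services.scan_service import ScanService
from apps.common.definitions import ScanStatus

service = ScanService()
# Move RUNNING → COMPLETED only if the scan is still RUNNING, so a concurrent
# transition to a final state (e.g. CANCELLED) is never overwritten.
ok = service.update_status_if_match(
    scan_id=42,
    current_status=ScanStatus.RUNNING,
    new_status=ScanStatus.COMPLETED,
)
if not ok:
    # Another worker already moved the scan to a final state; leave it alone.
    pass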

View File

@@ -1,613 +0,0 @@
"""
目标导出服务
提供统一的目标提取和文件导出功能,支持:
- URL 导出(纯导出,不做隐式回退)
- 默认 URL 生成(独立方法)
- 带回退链的 URL 导出(用例层编排)
- 域名/IP 导出(用于端口扫描)
- 黑名单过滤集成
"""
import ipaddress
import logging
from pathlib import Path
from typing import Dict, Any, Optional, List, Iterator, Tuple
from django.db.models import QuerySet
from apps.common.utils import BlacklistFilter
logger = logging.getLogger(__name__)
class DataSource:
"""数据源类型常量"""
ENDPOINT = "endpoint"
WEBSITE = "website"
HOST_PORT = "host_port"
DEFAULT = "default"
def create_export_service(target_id: int) -> 'TargetExportService':
"""
工厂函数:创建带黑名单过滤的导出服务
Args:
target_id: 目标 ID用于加载黑名单规则
Returns:
TargetExportService: 配置好黑名单过滤器的导出服务实例
"""
from apps.common.services import BlacklistService
rules = BlacklistService().get_rules(target_id)
blacklist_filter = BlacklistFilter(rules)
return TargetExportService(blacklist_filter=blacklist_filter)
def _iter_default_urls_from_target(
target_id: int,
blacklist_filter: Optional[BlacklistFilter] = None
) -> Iterator[str]:
"""
内部生成器:从 Target 本身生成默认 URL
根据 Target 类型生成 URL
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: 展开为所有 IP 的 http(s)://ip
- URL: 直接使用目标 URL
Args:
target_id: 目标 ID
blacklist_filter: 黑名单过滤器
Yields:
str: URL
"""
from apps.targets.services import TargetService
from apps.targets.models import Target
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d 不存在,无法生成默认 URL", target_id)
return
target_name = target.name
target_type = target.type
# Generate URLs by Target type
if target_type == Target.TargetType.DOMAIN:
urls = [f"http://{target_name}", f"https://{target_name}"]
elif target_type == Target.TargetType.IP:
urls = [f"http://{target_name}", f"https://{target_name}"]
elif target_type == Target.TargetType.CIDR:
try:
network = ipaddress.ip_network(target_name, strict=False)
urls = []
for ip in network.hosts():
urls.extend([f"http://{ip}", f"https://{ip}"])
# Special case for /32 or /128
if not urls:
ip = str(network.network_address)
urls = [f"http://{ip}", f"https://{ip}"]
except ValueError as e:
logger.error("Failed to parse CIDR: %s - %s", target_name, e)
return
elif target_type == Target.TargetType.URL:
urls = [target_name]
else:
logger.warning("Unsupported Target type: %s", target_type)
return
# Filter and yield
for url in urls:
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
yield url
def _iter_urls_with_fallback(
target_id: int,
sources: List[str],
blacklist_filter: Optional[BlacklistFilter] = None,
batch_size: int = 1000,
tried_sources: Optional[List[str]] = None
) -> Iterator[Tuple[str, str]]:
"""
内部生成器:流式产出 URL带回退链
按 sources 顺序尝试每个数据源,直到有数据返回。
回退逻辑:
- 数据源有数据且通过过滤 → 产出 URL停止回退
- 数据源有数据但全被过滤 → 不回退,停止(避免意外暴露)
- 数据源为空 → 继续尝试下一个
Args:
target_id: 目标 ID
sources: 数据源优先级列表
blacklist_filter: 黑名单过滤器
batch_size: 批次大小
tried_sources: 可选,用于记录尝试过的数据源(外部传入列表,会被修改)
Yields:
Tuple[str, str]: (url, source) - URL 和来源标识
"""
from apps.asset.models import Endpoint, WebSite
for source in sources:
if tried_sources is not None:
tried_sources.append(source)
has_output = False # any output (passed the filter)
has_raw_data = False # any raw data (before filtering)
if source == DataSource.DEFAULT:
# Default URL generation (built from the Target itself, reusing the shared generator)
for url in _iter_default_urls_from_target(target_id, blacklist_filter):
has_raw_data = True
has_output = True
yield url, source
# Check whether there was raw data (needs its own check: the generator may be empty after filtering)
if not has_raw_data:
# Re-check whether the Target exists
from apps.targets.services import TargetService
target = TargetService().get_target(target_id)
has_raw_data = target is not None
if has_raw_data:
if not has_output:
logger.info("%s has data but all of it was blacklist-filtered; not falling back", source)
return
continue
# Build the queryset for this data source
if source == DataSource.ENDPOINT:
queryset = Endpoint.objects.filter(target_id=target_id).values_list('url', flat=True)
elif source == DataSource.WEBSITE:
queryset = WebSite.objects.filter(target_id=target_id).values_list('url', flat=True)
else:
logger.warning("Unknown data source type: %s; skipping", source)
continue
for url in queryset.iterator(chunk_size=batch_size):
if url:
has_raw_data = True
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
has_output = True
yield url, source
# Stop as soon as there is raw data (filtered or not)
if has_raw_data:
if not has_output:
logger.info("%s has data but all of it was blacklist-filtered; not falling back", source)
return
logger.info("%s is empty; trying the next data source", source)
def get_urls_with_fallback(
target_id: int,
sources: List[str],
batch_size: int = 1000
) -> Dict[str, Any]:
"""
带回退链的 URL 获取用例函数(返回列表)
按 sources 顺序尝试每个数据源,直到有数据返回。
Args:
target_id: 目标 ID
sources: 数据源优先级列表,如 ["website", "endpoint", "default"]
batch_size: 批次大小
Returns:
dict: {
'success': bool,
'urls': List[str],
'total_count': int,
'source': str, # 实际使用的数据源
'tried_sources': List[str], # 尝试过的数据源
}
"""
from apps.common.services import BlacklistService
rules = BlacklistService().get_rules(target_id)
blacklist_filter = BlacklistFilter(rules)
urls = []
actual_source = 'none'
tried_sources = []
for url, source in _iter_urls_with_fallback(target_id, sources, blacklist_filter, batch_size, tried_sources):
urls.append(url)
actual_source = source
if urls:
logger.info("%s 获取 %d 条 URL", actual_source, len(urls))
else:
logger.warning("所有数据源都为空,无法获取 URL")
return {
'success': True,
'urls': urls,
'total_count': len(urls),
'source': actual_source,
'tried_sources': tried_sources,
}
def export_urls_with_fallback(
target_id: int,
output_file: str,
sources: List[str],
batch_size: int = 1000
) -> Dict[str, Any]:
"""
带回退链的 URL 导出用例函数(写入文件)
按 sources 顺序尝试每个数据源,直到有数据返回。
流式写入,内存占用 O(1)。
Args:
target_id: 目标 ID
output_file: 输出文件路径
sources: 数据源优先级列表,如 ["endpoint", "website", "default"]
batch_size: 批次大小
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int,
'source': str, # 实际使用的数据源
'tried_sources': List[str], # 尝试过的数据源
}
"""
from apps.common.services import BlacklistService
rules = BlacklistService().get_rules(target_id)
blacklist_filter = BlacklistFilter(rules)
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
total_count = 0
actual_source = 'none'
tried_sources = []
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url, source in _iter_urls_with_fallback(target_id, sources, blacklist_filter, batch_size, tried_sources):
f.write(f"{url}\n")
total_count += 1
actual_source = source
if total_count % 10000 == 0:
logger.info("Exported %d URLs...", total_count)
if total_count > 0:
logger.info("%s exported %d URLs to %s", actual_source, total_count, output_file)
else:
logger.warning("All data sources are empty; nothing was exported")
return {
'success': True,
'output_file': str(output_path),
'total_count': total_count,
'source': actual_source,
'tried_sources': tried_sources,
}
class TargetExportService:
"""
目标导出服务 - 提供统一的目标提取和文件导出功能
使用方式:
# 方式 1使用用例函数推荐
from apps.scan.services.target_export_service import export_urls_with_fallback, DataSource
result = export_urls_with_fallback(
target_id=1,
output_file='/path/to/output.txt',
sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT]
)
# 方式 2直接使用 Service纯导出不带回退
export_service = create_export_service(target_id)
result = export_service.export_urls(target_id, output_path, queryset)
"""
def __init__(self, blacklist_filter: Optional[BlacklistFilter] = None):
"""
初始化导出服务
Args:
blacklist_filter: 黑名单过滤器None 表示禁用过滤
"""
self.blacklist_filter = blacklist_filter
def export_urls(
self,
target_id: int,
output_path: str,
queryset: QuerySet,
url_field: str = 'url',
batch_size: int = 1000
) -> Dict[str, Any]:
"""
纯 URL 导出函数 - 只负责将 queryset 数据写入文件
不做任何隐式回退或默认 URL 生成。
Args:
target_id: 目标 ID
output_path: 输出文件路径
queryset: 数据源 queryset由调用方构建应为 values_list flat=True
url_field: URL 字段名(用于黑名单过滤)
batch_size: 批次大小
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int, # 实际写入数量
'queryset_count': int, # 原始数据数量(迭代计数)
'filtered_count': int, # 被黑名单过滤的数量
}
Raises:
IOError: 文件写入失败
"""
output_file = Path(output_path)
output_file.parent.mkdir(parents=True, exist_ok=True)
logger.info("开始导出 URL - target_id=%s, output=%s", target_id, output_path)
total_count = 0
filtered_count = 0
queryset_count = 0
try:
with open(output_file, 'w', encoding='utf-8', buffering=8192) as f:
for url in queryset.iterator(chunk_size=batch_size):
queryset_count += 1
if url:
# Blacklist filtering
if self.blacklist_filter and not self.blacklist_filter.is_allowed(url):
filtered_count += 1
continue
f.write(f"{url}\n")
total_count += 1
if total_count % 10000 == 0:
logger.info("已导出 %d 个 URL...", total_count)
except IOError as e:
logger.error("文件写入失败: %s - %s", output_path, e)
raise
if filtered_count > 0:
logger.info("黑名单过滤: 过滤 %d 个 URL", filtered_count)
logger.info(
"✓ URL 导出完成 - 写入: %d, 原始: %d, 过滤: %d, 文件: %s",
total_count, queryset_count, filtered_count, output_path
)
return {
'success': True,
'output_file': str(output_file),
'total_count': total_count,
'queryset_count': queryset_count,
'filtered_count': filtered_count,
}
def generate_default_urls(
self,
target_id: int,
output_path: str
) -> Dict[str, Any]:
"""
默认 URL 生成器
根据 Target 类型生成默认 URL
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: 展开为所有 IP 的 http(s)://ip
- URL: 直接使用目标 URL
Args:
target_id: 目标 ID
output_path: 输出文件路径
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int,
}
"""
output_file = Path(output_path)
output_file.parent.mkdir(parents=True, exist_ok=True)
logger.info("生成默认 URL - target_id=%d", target_id)
total_urls = 0
with open(output_file, 'w', encoding='utf-8', buffering=8192) as f:
for url in _iter_default_urls_from_target(target_id, self.blacklist_filter):
f.write(f"{url}\n")
total_urls += 1
if total_urls % 10000 == 0:
logger.info("已生成 %d 个 URL...", total_urls)
logger.info("✓ 默认 URL 生成完成 - 数量: %d", total_urls)
return {
'success': True,
'output_file': str(output_file),
'total_count': total_urls,
}
def export_hosts(
self,
target_id: int,
output_path: str,
batch_size: int = 1000
) -> Dict[str, Any]:
"""
主机列表导出函数(用于端口扫描)
根据 Target 类型选择导出逻辑:
- DOMAIN: 从 Subdomain 表流式导出子域名
- IP: 直接写入 IP 地址
- CIDR: 展开为所有主机 IP
Args:
target_id: 目标 ID
output_path: 输出文件路径
batch_size: 批次大小
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int,
'target_type': str
}
"""
from apps.targets.services import TargetService
from apps.targets.models import Target
output_file = Path(output_path)
output_file.parent.mkdir(parents=True, exist_ok=True)
# Fetch the Target info
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
raise ValueError(f"Target ID {target_id} does not exist")
target_type = target.type
target_name = target.name
logger.info(
"开始导出主机列表 - Target ID: %d, Name: %s, Type: %s, 输出文件: %s",
target_id, target_name, target_type, output_path
)
total_count = 0
if target_type == Target.TargetType.DOMAIN:
total_count = self._export_domains(target_id, target_name, output_file, batch_size)
type_desc = "域名"
elif target_type == Target.TargetType.IP:
total_count = self._export_ip(target_name, output_file)
type_desc = "IP"
elif target_type == Target.TargetType.CIDR:
total_count = self._export_cidr(target_name, output_file)
type_desc = "CIDR IP"
else:
raise ValueError(f"不支持的目标类型: {target_type}")
logger.info(
"✓ 主机列表导出完成 - 类型: %s, 总数: %d, 文件: %s",
type_desc, total_count, output_path
)
return {
'success': True,
'output_file': str(output_file),
'total_count': total_count,
'target_type': target_type
}
def _export_domains(
self,
target_id: int,
target_name: str,
output_path: Path,
batch_size: int
) -> int:
"""导出域名类型目标的根域名 + 子域名"""
from apps.asset.services.asset.subdomain_service import SubdomainService
subdomain_service = SubdomainService()
domain_iterator = subdomain_service.iter_subdomain_names_by_target(
target_id=target_id,
chunk_size=batch_size
)
total_count = 0
written_domains = set() # Deduplicate (the subdomain table may already contain the root domain)
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
# 1. Write the root domain first
if self._should_write_target(target_name):
f.write(f"{target_name}\n")
written_domains.add(target_name)
total_count += 1
# 2. Then write the subdomains (skipping the already-written root domain)
for domain_name in domain_iterator:
if domain_name in written_domains:
continue
if self._should_write_target(domain_name):
f.write(f"{domain_name}\n")
written_domains.add(domain_name)
total_count += 1
if total_count % 10000 == 0:
logger.info("已导出 %d 个域名...", total_count)
return total_count
def _export_ip(self, target_name: str, output_path: Path) -> int:
"""导出 IP 类型目标"""
if self._should_write_target(target_name):
with open(output_path, 'w', encoding='utf-8') as f:
f.write(f"{target_name}\n")
return 1
return 0
def _export_cidr(self, target_name: str, output_path: Path) -> int:
"""导出 CIDR 类型目标,展开为每个 IP"""
network = ipaddress.ip_network(target_name, strict=False)
total_count = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for ip in network.hosts():
ip_str = str(ip)
if self._should_write_target(ip_str):
f.write(f"{ip_str}\n")
total_count += 1
if total_count % 10000 == 0:
logger.info("已导出 %d 个 IP...", total_count)
# /32 或 /128 特殊处理
if total_count == 0:
ip_str = str(network.network_address)
if self._should_write_target(ip_str):
with open(output_path, 'w', encoding='utf-8') as f:
f.write(f"{ip_str}\n")
total_count = 1
return total_count
def _should_write_target(self, target: str) -> bool:
"""检查目标是否应该写入(通过黑名单过滤)"""
if self.blacklist_filter:
return self.blacklist_filter.is_allowed(target)
return True
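A hedged sketch of the fallback-chain semantics defined above; the target_id and output path are made up:

from apps.scan.services.target_export_service import (
    export_urls_with_fallback,
    DataSource,
)

result = export_urls_with_fallback(
    target_id=1,
    output_file="/tmp/urls.txt",
    sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT],
)
# If the Endpoint table is empty but WebSite has rows, the chain stops there:
#   result['source']        == 'website'
#   result['tried_sources'] == ['endpoint', 'website']
# If WebSite had rows that were all blacklist-filtered, the chain stops without
# falling back to DEFAULT (total_count == 0, by design).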

View File

@@ -1,116 +0,0 @@
"""
导出站点 URL 到 TXT 文件的 Task
支持两种模式:
1. 传统模式(向后兼容):使用 target_id 从数据库导出
2. Provider 模式:使用 TargetProvider 从任意数据源导出
数据源: WebSite.url → Default
"""
import logging
from typing import Optional
from pathlib import Path
from prefect import task
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
from apps.scan.providers import TargetProvider
logger = logging.getLogger(__name__)
@task(name="export_sites")
def export_sites_task(
target_id: Optional[int] = None,
output_file: str = "",
provider: Optional[TargetProvider] = None,
batch_size: int = 1000,
) -> dict:
"""
导出目标下的所有站点 URL 到 TXT 文件
支持两种模式:
1. 传统模式(向后兼容):传入 target_id从数据库导出
2. Provider 模式:传入 provider从任意数据源导出
数据源优先级(回退链,仅传统模式):
1. WebSite 表 - 站点级别 URL
2. 默认生成 - 根据 Target 类型生成 http(s)://target_name
Args:
target_id: 目标 ID传统模式向后兼容
output_file: 输出文件路径(绝对路径)
provider: TargetProvider 实例(新模式)
batch_size: 每次读取的批次大小,默认 1000
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int
}
Raises:
ValueError: 参数错误
IOError: 文件写入失败
"""
# Argument validation: at least one must be provided
if target_id is None and provider is None:
raise ValueError("Either target_id or provider must be provided")
# Provider mode: export via TargetProvider
if provider is not None:
logger.info("Using Provider mode - Provider: %s", type(provider).__name__)
return _export_with_provider(output_file, provider)
# Legacy mode: use export_urls_with_fallback
logger.info("Using legacy mode - Target ID: %d", target_id)
result = export_urls_with_fallback(
target_id=target_id,
output_file=output_file,
sources=[DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"Site URL export complete - source=%s, count=%d",
result['source'], result['total_count']
)
# Keep the return format unchanged (backward compatible)
return {
'success': result['success'],
'output_file': result['output_file'],
'total_count': result['total_count'],
}
def _export_with_provider(output_file: str, provider: TargetProvider) -> dict:
"""使用 Provider 导出 URL"""
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
total_count = 0
blacklist_filter = provider.get_blacklist_filter()
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url in provider.iter_urls():
# Apply blacklist filtering (if any)
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
f.write(f"{url}\n")
total_count += 1
if total_count % 1000 == 0:
logger.info("Exported %d URLs...", total_count)
logger.info("✓ URL export complete - total: %d, file: %s", total_count, str(output_path))
return {
'success': True,
'output_file': str(output_path),
'total_count': total_count,
}
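A minimal in-memory provider sketch for the Provider mode above. The interface is inferred from how these tasks call it (iter_urls / iter_hosts / get_blacklist_filter); the real base class in apps.scan.providers may differ:

from typing import Iterator, List, Optional

class StaticListProvider:
    """Serves a fixed URL/host list instead of reading the database (sketch)."""

    def __init__(self, urls: List[str], hosts: Optional[List[str]] = None):
        self._urls = urls
        self._hosts = hosts or []

    def iter_urls(self) -> Iterator[str]:
        yield from self._urls

    def iter_hosts(self) -> Iterator[str]:
        yield from self._hosts

    def get_blacklist_filter(self):
        return None  # no filtering in this sketch

# export_sites_task.fn(output_file="/tmp/sites.txt",
#                      provider=StaticListProvider(["https://example.com"]))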

View File

@@ -1,112 +0,0 @@
"""
导出 URL 任务
支持两种模式:
1. 传统模式(向后兼容):使用 target_id 从数据库导出
2. Provider 模式:使用 TargetProvider 从任意数据源导出
用于指纹识别前导出目标下的 URL 到文件
"""
import logging
from typing import Optional
from pathlib import Path
from prefect import task
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
from apps.scan.providers import TargetProvider, DatabaseTargetProvider
logger = logging.getLogger(__name__)
@task(name="export_urls_for_fingerprint")
def export_urls_for_fingerprint_task(
target_id: Optional[int] = None,
output_file: str = "",
source: str = 'website', # retained for old callers (the actual value is decided by the fallback chain)
provider: Optional[TargetProvider] = None,
batch_size: int = 1000
) -> dict:
"""
导出目标下的 URL 到文件(用于指纹识别)
支持两种模式:
1. 传统模式(向后兼容):传入 target_id从数据库导出
2. Provider 模式:传入 provider从任意数据源导出
数据源优先级(回退链,仅传统模式):
1. WebSite 表 - 站点级别 URL
2. 默认生成 - 根据 Target 类型生成 http(s)://target_name
Args:
target_id: 目标 ID传统模式向后兼容
output_file: 输出文件路径
source: 数据源类型(保留参数,兼容旧调用,实际值由回退链决定)
provider: TargetProvider 实例(新模式)
batch_size: 批量读取大小
Returns:
dict: {'output_file': str, 'total_count': int, 'source': str}
"""
# Argument validation: at least one must be provided
if target_id is None and provider is None:
raise ValueError("Either target_id or provider must be provided")
# Provider mode: export via TargetProvider
if provider is not None:
logger.info("Using Provider mode - Provider: %s", type(provider).__name__)
return _export_with_provider(output_file, provider)
# Legacy mode: use export_urls_with_fallback
logger.info("Using legacy mode - Target ID: %d", target_id)
result = export_urls_with_fallback(
target_id=target_id,
output_file=output_file,
sources=[DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"Fingerprint URL export complete - source=%s, count=%d",
result['source'], result['total_count']
)
# Return the data source actually used (no longer fixed to "website")
return {
'output_file': result['output_file'],
'total_count': result['total_count'],
'source': result['source'],
}
def _export_with_provider(output_file: str, provider: TargetProvider) -> dict:
"""使用 Provider 导出 URL"""
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
total_count = 0
blacklist_filter = provider.get_blacklist_filter()
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url in provider.iter_urls():
# Apply blacklist filtering (if any)
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
f.write(f"{url}\n")
total_count += 1
if total_count % 1000 == 0:
logger.info("Exported %d URLs...", total_count)
logger.info("✓ URL export complete - total: %d, file: %s", total_count, str(output_path))
return {
'output_file': str(output_path),
'total_count': total_count,
'source': 'provider',
}
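A hedged sketch of the legacy call path above: the retained source argument is accepted but ignored, and the returned 'source' reflects whatever the fallback chain actually used. `.fn` invokes the undecorated function outside the Prefect runtime; values are made up:

result = export_urls_for_fingerprint_task.fn(
    target_id=1,
    output_file="/tmp/fingerprint_urls.txt",
    source="endpoint",  # retained for old callers; has no effect
)
print(result["source"])  # 'website' or 'default', as decided by the chain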

View File

@@ -1,99 +0,0 @@
"""
导出主机列表到 TXT 文件的 Task
支持两种模式:
1. 传统模式(向后兼容):使用 target_id 从数据库导出
2. Provider 模式:使用 TargetProvider 从任意数据源导出
根据 Target 类型决定导出内容:
- DOMAIN: 从 Subdomain 表导出子域名
- IP: 直接写入 target.name
- CIDR: 展开 CIDR 范围内的所有 IP
"""
import logging
from pathlib import Path
from typing import Optional
from prefect import task
from apps.scan.providers import DatabaseTargetProvider, TargetProvider
logger = logging.getLogger(__name__)
@task(name="export_hosts")
def export_hosts_task(
output_file: str,
target_id: Optional[int] = None,
provider: Optional[TargetProvider] = None,
) -> dict:
"""
导出主机列表到 TXT 文件
支持两种模式:
1. 传统模式(向后兼容):传入 target_id从数据库导出
2. Provider 模式:传入 provider从任意数据源导出
根据 Target 类型自动决定导出内容:
- DOMAIN: 从 Subdomain 表导出子域名(流式处理,支持 10万+ 域名)
- IP: 直接写入 target.name单个 IP
- CIDR: 展开 CIDR 范围内的所有可用 IP
Args:
output_file: 输出文件路径(绝对路径)
target_id: 目标 ID传统模式向后兼容
provider: TargetProvider 实例(新模式)
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int,
'target_type': str # 仅传统模式返回
}
Raises:
ValueError: 参数错误target_id 和 provider 都未提供)
IOError: 文件写入失败
"""
if target_id is None and provider is None:
raise ValueError("Either target_id or provider must be provided")
# Backward compatibility: when no provider is given, build a DatabaseTargetProvider from target_id
use_legacy_mode = provider is None
if use_legacy_mode:
logger.info("Using legacy mode - Target ID: %d", target_id)
provider = DatabaseTargetProvider(target_id=target_id)
else:
logger.info("Using Provider mode - Provider: %s", type(provider).__name__)
# Ensure the output directory exists
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Export the host list via the Provider (iter_hosts already applies blacklist filtering)
total_count = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for host in provider.iter_hosts():
f.write(f"{host}\n")
total_count += 1
if total_count % 1000 == 0:
logger.info("已导出 %d 个主机...", total_count)
logger.info("✓ 主机列表导出完成 - 总数: %d, 文件: %s", total_count, str(output_path))
result = {
'success': True,
'output_file': str(output_path),
'total_count': total_count,
}
# Legacy mode: keep the return format unchanged (backward compatible)
if use_legacy_mode:
from apps.targets.services import TargetService
target = TargetService().get_target(target_id)
result['target_type'] = target.type if target else 'unknown'
return result
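A hedged sketch of the two calling modes of export_hosts_task; `.fn` bypasses the Prefect runtime, the IDs/paths are made up, and the inline provider class is hypothetical:

# Legacy mode: a DatabaseTargetProvider is built internally; 'target_type' is returned.
r1 = export_hosts_task.fn(output_file="/tmp/hosts.txt", target_id=1)

class _HostsOnly:
    """Hypothetical minimal provider: only iter_hosts is needed by this task."""
    def iter_hosts(self):
        yield from ["192.0.2.10", "192.0.2.11"]

# Provider mode: 'target_type' is omitted from the result.
r2 = export_hosts_task.fn(output_file="/tmp/hosts2.txt", provider=_HostsOnly())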

View File

@@ -1,208 +0,0 @@
"""
导出站点URL到文件的Task
支持两种模式:
1. 传统模式(向后兼容):使用 target_id 从数据库导出
2. Provider 模式:使用 TargetProvider 从任意数据源导出
特殊逻辑:
- 80 端口:只生成 HTTP URL省略端口号
- 443 端口:只生成 HTTPS URL省略端口号
- 其他端口:生成 HTTP 和 HTTPS 两个URL带端口号
"""
import logging
from typing import Optional
from pathlib import Path
from prefect import task
from apps.asset.services import HostPortMappingService
from apps.scan.services.target_export_service import create_export_service
from apps.common.services import BlacklistService
from apps.common.utils import BlacklistFilter
from apps.scan.providers import TargetProvider, DatabaseTargetProvider, ProviderContext
logger = logging.getLogger(__name__)
def _generate_urls_from_port(host: str, port: int) -> list[str]:
"""
根据端口生成 URL 列表
- 80 端口:只生成 HTTP URL省略端口号
- 443 端口:只生成 HTTPS URL省略端口号
- 其他端口:生成 HTTP 和 HTTPS 两个URL带端口号
"""
if port == 80:
return [f"http://{host}"]
elif port == 443:
return [f"https://{host}"]
else:
return [f"http://{host}:{port}", f"https://{host}:{port}"]
@task(name="export_site_urls")
def export_site_urls_task(
output_file: str,
target_id: Optional[int] = None,
provider: Optional[TargetProvider] = None,
batch_size: int = 1000
) -> dict:
"""
导出目标下的所有站点URL到文件
支持两种模式:
1. 传统模式(向后兼容):传入 target_id从 HostPortMapping 表导出
2. Provider 模式:传入 provider从任意数据源导出
传统模式特殊逻辑:
- 80 端口:只生成 HTTP URL省略端口号
- 443 端口:只生成 HTTPS URL省略端口号
- 其他端口:生成 HTTP 和 HTTPS 两个URL带端口号
回退逻辑(仅传统模式):
- 如果 HostPortMapping 为空,使用 generate_default_urls() 生成默认 URL
Args:
output_file: 输出文件路径(绝对路径)
target_id: 目标ID传统模式向后兼容
provider: TargetProvider 实例(新模式)
batch_size: 每次处理的批次大小
Returns:
dict: {
'success': bool,
'output_file': str,
'total_urls': int,
'association_count': int, # 主机端口关联数量(仅传统模式)
'source': str, # 数据来源: "host_port" | "default" | "provider"
}
Raises:
ValueError: 参数错误
IOError: 文件写入失败
"""
# Argument validation: at least one must be provided
if target_id is None and provider is None:
raise ValueError("Either target_id or provider must be provided")
# Backward compatibility: fall back to legacy mode when no provider is given
if provider is None:
logger.info("Using legacy mode - Target ID: %d, output file: %s", target_id, output_file)
return _export_site_urls_legacy(target_id, output_file, batch_size)
# Provider mode
logger.info("Using Provider mode - Provider: %s, output file: %s", type(provider).__name__, output_file)
# Ensure the output directory exists
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Export the URL list via the Provider
total_urls = 0
blacklist_filter = provider.get_blacklist_filter()
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url in provider.iter_urls():
# Apply blacklist filtering (if any)
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
f.write(f"{url}\n")
total_urls += 1
if total_urls % 1000 == 0:
logger.info("Exported %d URLs...", total_urls)
logger.info("✓ URL export complete - total: %d, file: %s", total_urls, str(output_path))
return {
'success': True,
'output_file': str(output_path),
'total_urls': total_urls,
'source': 'provider',
}
def _export_site_urls_legacy(target_id: int, output_file: str, batch_size: int) -> dict:
"""
传统模式:从 HostPortMapping 表导出 URL
保持原有逻辑不变,确保向后兼容
"""
logger.info("开始统计站点URL - Target ID: %d, 输出文件: %s", target_id, output_file)
# Ensure the output directory exists
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Load the rules and build the filter
blacklist_filter = BlacklistFilter(BlacklistService().get_rules(target_id))
# Query the HostPortMapping table directly, ordered by host
service = HostPortMappingService()
associations = service.iter_host_port_by_target(
target_id=target_id,
batch_size=batch_size,
)
total_urls = 0
association_count = 0
filtered_count = 0
# Stream-write to the file (special port logic)
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for assoc in associations:
association_count += 1
host = assoc['host']
port = assoc['port']
# Validate the host first; only generate URLs if it passes
if not blacklist_filter.is_allowed(host):
filtered_count += 1
continue
# Generate URLs from the port number
for url in _generate_urls_from_port(host, port):
f.write(f"{url}\n")
total_urls += 1
if association_count % 1000 == 0:
logger.info("已处理 %d 条关联,生成 %d 个URL...", association_count, total_urls)
if filtered_count > 0:
logger.info("Blacklist filter removed %d associations", filtered_count)
logger.info(
"✓ Site URL export complete - associations: %d, total URLs: %d, file: %s",
association_count, total_urls, str(output_path)
)
# Determine the data source
source = "host_port"
# Data exists but all of it was filtered: do not fall back
if association_count > 0 and total_urls == 0:
logger.info("HostPortMapping has %d rows, but all were blacklist-filtered; not falling back", association_count)
return {
'success': True,
'output_file': str(output_path),
'total_urls': 0,
'association_count': association_count,
'source': source,
}
# Data source is empty: fall back to default URL generation
if total_urls == 0:
logger.info("HostPortMapping is empty; using default URL generation")
export_service = create_export_service(target_id)
result = export_service.generate_default_urls(target_id, str(output_path))
total_urls = result['total_count']
source = "default"
return {
'success': True,
'output_file': str(output_path),
'total_urls': total_urls,
'association_count': association_count,
'source': source,
}
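A quick check of the port→URL rules implemented by _generate_urls_from_port above (hypothetical host):

assert _generate_urls_from_port("example.com", 80) == ["http://example.com"]
assert _generate_urls_from_port("example.com", 443) == ["https://example.com"]
assert _generate_urls_from_port("example.com", 8443) == [
    "http://example.com:8443",
    "https://example.com:8443",
]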

View File

@@ -1,120 +0,0 @@
"""
导出站点 URL 列表任务
支持两种模式:
1. 传统模式(向后兼容):使用 target_id 从数据库导出
2. Provider 模式:使用 TargetProvider 从任意数据源导出
数据源: WebSite.url → Default用于 katana 等爬虫工具)
"""
import logging
from typing import Optional
from pathlib import Path
from prefect import task
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
from apps.scan.providers import TargetProvider, DatabaseTargetProvider
logger = logging.getLogger(__name__)
@task(
name='export_sites_for_url_fetch',
retries=1,
log_prints=True
)
def export_sites_task(
output_file: str,
target_id: Optional[int] = None,
scan_id: Optional[int] = None,
provider: Optional[TargetProvider] = None,
batch_size: int = 1000
) -> dict:
"""
导出站点 URL 列表到文件(用于 katana 等爬虫工具)
支持两种模式:
1. 传统模式(向后兼容):传入 target_id从数据库导出
2. Provider 模式:传入 provider从任意数据源导出
数据源优先级(回退链,仅传统模式):
1. WebSite 表 - 站点级别 URL
2. 默认生成 - 根据 Target 类型生成 http(s)://target_name
Args:
output_file: 输出文件路径
target_id: 目标 ID传统模式向后兼容
scan_id: 扫描 ID保留参数兼容旧调用
provider: TargetProvider 实例(新模式)
batch_size: 批次大小(内存优化)
Returns:
dict: {
'output_file': str, # 输出文件路径
'asset_count': int, # 资产数量
}
Raises:
ValueError: 参数错误
RuntimeError: 执行失败
"""
# Argument validation: at least one must be provided
if target_id is None and provider is None:
raise ValueError("Either target_id or provider must be provided")
# Provider mode: export via TargetProvider
if provider is not None:
logger.info("Using Provider mode - Provider: %s", type(provider).__name__)
return _export_with_provider(output_file, provider)
# Legacy mode: use export_urls_with_fallback
logger.info("Using legacy mode - Target ID: %d", target_id)
result = export_urls_with_fallback(
target_id=target_id,
output_file=output_file,
sources=[DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"Site URL export complete - source=%s, count=%d",
result['source'], result['total_count']
)
# Keep the return format unchanged (backward compatible)
return {
'output_file': result['output_file'],
'asset_count': result['total_count'],
}
def _export_with_provider(output_file: str, provider: TargetProvider) -> dict:
"""使用 Provider 导出 URL"""
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
total_count = 0
blacklist_filter = provider.get_blacklist_filter()
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url in provider.iter_urls():
# Apply blacklist filtering (if any)
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
f.write(f"{url}\n")
total_count += 1
if total_count % 1000 == 0:
logger.info("Exported %d URLs...", total_count)
logger.info("✓ URL export complete - total: %d, file: %s", total_count, str(output_path))
return {
'output_file': str(output_path),
'asset_count': total_count,
}

View File

@@ -1,118 +0,0 @@
"""导出 Endpoint URL 到文件的 Task
支持两种模式:
1. 传统模式(向后兼容):使用 target_id 从数据库导出
2. Provider 模式:使用 TargetProvider 从任意数据源导出
数据源优先级(回退链,仅传统模式):
1. Endpoint.url - 最精细的 URL含路径、参数等
2. WebSite.url - 站点级别 URL
3. 默认生成 - 根据 Target 类型生成 http(s)://target_name
"""
import logging
from typing import Dict, Optional
from pathlib import Path
from prefect import task
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
from apps.scan.providers import TargetProvider, DatabaseTargetProvider
logger = logging.getLogger(__name__)
@task(name="export_endpoints")
def export_endpoints_task(
target_id: Optional[int] = None,
output_file: str = "",
provider: Optional[TargetProvider] = None,
batch_size: int = 1000,
) -> Dict[str, object]:
"""导出目标下的所有 Endpoint URL 到文本文件。
支持两种模式:
1. 传统模式(向后兼容):传入 target_id从数据库导出
2. Provider 模式:传入 provider从任意数据源导出
数据源优先级(回退链,仅传统模式):
1. Endpoint 表 - 最精细的 URL含路径、参数等
2. WebSite 表 - 站点级别 URL
3. 默认生成 - 根据 Target 类型生成 http(s)://target_name
Args:
target_id: 目标 ID传统模式向后兼容
output_file: 输出文件路径(绝对路径)
provider: TargetProvider 实例(新模式)
batch_size: 每次从数据库迭代的批大小
Returns:
dict: {
"success": bool,
"output_file": str,
"total_count": int,
"source": str, # 数据来源: "endpoint" | "website" | "default" | "none" | "provider"
}
"""
# Argument validation: at least one must be provided
if target_id is None and provider is None:
raise ValueError("Either target_id or provider must be provided")
# Provider mode: export via TargetProvider
if provider is not None:
logger.info("Using Provider mode - Provider: %s", type(provider).__name__)
return _export_with_provider(output_file, provider)
# Legacy mode: use export_urls_with_fallback
logger.info("Using legacy mode - Target ID: %d", target_id)
result = export_urls_with_fallback(
target_id=target_id,
output_file=output_file,
sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"URL 导出完成 - source=%s, count=%d, tried=%s",
result['source'], result['total_count'], result['tried_sources']
)
return {
"success": result['success'],
"output_file": result['output_file'],
"total_count": result['total_count'],
"source": result['source'],
}
def _export_with_provider(output_file: str, provider: TargetProvider) -> Dict[str, object]:
"""使用 Provider 导出 URL"""
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
total_count = 0
blacklist_filter = provider.get_blacklist_filter()
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url in provider.iter_urls():
# Apply blacklist filtering (if any)
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
f.write(f"{url}\n")
total_count += 1
if total_count % 1000 == 0:
logger.info("Exported %d URLs...", total_count)
logger.info("✓ URL export complete - total: %d, file: %s", total_count, str(output_path))
return {
"success": True,
"output_file": str(output_path),
"total_count": total_count,
"source": "provider",
}
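A hedged sketch of the three-level fallback in legacy mode; the target_id and path are made up, and `.fn` bypasses the Prefect runtime:

result = export_endpoints_task.fn(
    target_id=1,
    output_file="/tmp/endpoints.txt",
)
# 'source' is 'endpoint' when the Endpoint table has rows, else 'website',
# else 'default'; 'none' means every source was empty.
print(result["source"], result["total_count"])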

View File

@@ -1,497 +0,0 @@
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.exceptions import NotFound, APIException
from rest_framework.filters import SearchFilter
from django_filters.rest_framework import DjangoFilterBackend
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.db.utils import DatabaseError, IntegrityError, OperationalError
import logging
from apps.common.response_helpers import success_response, error_response
from apps.common.error_codes import ErrorCodes
from apps.scan.utils.config_merger import ConfigConflictError
logger = logging.getLogger(__name__)
from ..models import Scan, ScheduledScan
from ..serializers import (
ScanSerializer, ScanHistorySerializer, QuickScanSerializer,
InitiateScanSerializer, ScheduledScanSerializer, CreateScheduledScanSerializer,
UpdateScheduledScanSerializer, ToggleScheduledScanSerializer
)
from ..services.scan_service import ScanService
from ..services.scheduled_scan_service import ScheduledScanService
from ..repositories import ScheduledScanDTO
from apps.targets.services.target_service import TargetService
from apps.targets.services.organization_service import OrganizationService
from apps.engine.services.engine_service import EngineService
from apps.common.definitions import ScanStatus
from apps.common.pagination import BasePagination
class ScanViewSet(viewsets.ModelViewSet):
"""扫描任务视图集"""
serializer_class = ScanSerializer
pagination_class = BasePagination
filter_backends = [DjangoFilterBackend, SearchFilter]
filterset_fields = ['target'] # supports ?target=123 filtering
search_fields = ['target__name'] # search by target name
def get_queryset(self):
"""优化查询集提升API性能
查询优化策略:
- select_related: 预加载 target 和 engine一对一/多对一关系,使用 JOIN
- 移除 prefetch_related: 避免加载大量资产数据到内存
- order_by: 按创建时间降序排列(最新创建的任务排在最前面)
性能优化原理:
- 列表页使用缓存统计字段cached_*_count避免实时 COUNT 查询
- 序列化器:严格验证缓存字段,确保数据一致性
- 分页场景每页只显示10条记录查询高效
- 避免大数据加载:不再预加载所有关联的资产数据
"""
# 只保留必要的 select_related移除所有 prefetch_related
scan_service = ScanService()
queryset = scan_service.get_all_scans(prefetch_relations=True)
return queryset
def get_serializer_class(self):
"""根据不同的 action 返回不同的序列化器
- list action: 使用 ScanHistorySerializer包含 summary 和 progress
- retrieve action: 使用 ScanHistorySerializer包含 summary 和 progress
- 其他 action: 使用标准的 ScanSerializer
"""
if self.action in ['list', 'retrieve']:
return ScanHistorySerializer
return ScanSerializer
def destroy(self, request, *args, **kwargs):
"""
删除单个扫描任务(两阶段删除)
1. 软删除:立即对用户不可见
2. 硬删除:后台异步执行
"""
try:
scan = self.get_object()
scan_service = ScanService()
result = scan_service.delete_scans_two_phase([scan.id])
return success_response(
data={
'scanId': scan.id,
'deletedCount': result['soft_deleted_count'],
'deletedScans': result['scan_names']
}
)
except Scan.DoesNotExist:
return error_response(
code=ErrorCodes.NOT_FOUND,
status_code=status.HTTP_404_NOT_FOUND
)
except ValueError as e:
return error_response(
code=ErrorCodes.NOT_FOUND,
message=str(e),
status_code=status.HTTP_404_NOT_FOUND
)
except Exception as e:
logger.exception("删除扫描任务时发生错误")
return error_response(
code=ErrorCodes.SERVER_ERROR,
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@action(detail=False, methods=['post'])
def quick(self, request):
"""
快速扫描接口
功能:
1. 接收目标列表和 YAML 配置
2. 自动解析输入(支持 URL、域名、IP、CIDR
3. 批量创建 Target、Website、Endpoint 资产
4. 立即发起批量扫描
请求参数:
{
"targets": [{"name": "example.com"}, {"name": "https://example.com/api"}],
"configuration": "subdomain_discovery:\n enabled: true\n ...",
"engine_ids": [1, 2], // 可选,用于记录
"engine_names": ["引擎A", "引擎B"] // 可选,用于记录
}
支持的输入格式:
- 域名: example.com
- IP: 192.168.1.1
- CIDR: 10.0.0.0/8
- URL: https://example.com/api/v1
"""
from ..services.quick_scan_service import QuickScanService
serializer = QuickScanSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
targets_data = serializer.validated_data['targets']
configuration = serializer.validated_data['configuration']
engine_ids = serializer.validated_data.get('engine_ids', [])
engine_names = serializer.validated_data.get('engine_names', [])
try:
# Extract the list of input strings
inputs = [t['name'] for t in targets_data]
# 1. Parse inputs and create assets via QuickScanService
quick_scan_service = QuickScanService()
result = quick_scan_service.process_quick_scan(inputs, engine_ids[0] if engine_ids else None)
targets = result['targets']
if not targets:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='No valid targets for scanning',
details=result.get('errors', []),
status_code=status.HTTP_400_BAD_REQUEST
)
# 2. Create scans directly from the configuration sent by the frontend
scan_service = ScanService()
created_scans = scan_service.create_scans(
targets=targets,
engine_ids=engine_ids,
engine_names=engine_names,
yaml_configuration=configuration
)
# Check whether any scan tasks were created
if not created_scans:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='No scan tasks were created. All targets may already have active scans.',
details={
'targetStats': result['target_stats'],
'assetStats': result['asset_stats'],
'errors': result.get('errors', [])
},
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY
)
# Serialize the response
scan_serializer = ScanSerializer(created_scans, many=True)
return success_response(
data={
'count': len(created_scans),
'targetStats': result['target_stats'],
'assetStats': result['asset_stats'],
'errors': result.get('errors', []),
'scans': scan_serializer.data
},
status_code=status.HTTP_201_CREATED
)
except ValidationError as e:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message=str(e),
status_code=status.HTTP_400_BAD_REQUEST
)
except Exception as e:
logger.exception("快速扫描启动失败")
return error_response(
code=ErrorCodes.SERVER_ERROR,
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@action(detail=False, methods=['post'])
def initiate(self, request):
"""
发起扫描任务
请求参数:
- organization_id: 组织ID (int, 可选)
- target_id: 目标ID (int, 可选)
- configuration: YAML 配置字符串 (str, 必填)
- engine_ids: 扫描引擎ID列表 (list[int], 必填)
- engine_names: 引擎名称列表 (list[str], 必填)
注意: organization_id 和 target_id 二选一
返回:
- 扫描任务详情(单个或多个)
"""
# Validate the request data with the serializer
serializer = InitiateScanSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
# Get the validated data
organization_id = serializer.validated_data.get('organization_id')
target_id = serializer.validated_data.get('target_id')
configuration = serializer.validated_data['configuration']
engine_ids = serializer.validated_data['engine_ids']
engine_names = serializer.validated_data['engine_names']
try:
# Fetch the target list
scan_service = ScanService()
if organization_id:
from apps.targets.repositories import DjangoOrganizationRepository
org_repo = DjangoOrganizationRepository()
organization = org_repo.get_by_id(organization_id)
if not organization:
raise ObjectDoesNotExist(f'Organization ID {organization_id} does not exist')
targets = org_repo.get_targets(organization_id)
if not targets:
raise ValidationError(f'Organization ID {organization_id} has no targets')
else:
from apps.targets.repositories import DjangoTargetRepository
target_repo = DjangoTargetRepository()
target = target_repo.get_by_id(target_id)
if not target:
raise ObjectDoesNotExist(f'Target ID {target_id} does not exist')
targets = [target]
# Create scans directly from the configuration sent by the frontend
created_scans = scan_service.create_scans(
targets=targets,
engine_ids=engine_ids,
engine_names=engine_names,
yaml_configuration=configuration
)
# Check whether any scan tasks were created
if not created_scans:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='No scan tasks were created. All targets may already have active scans.',
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY
)
# Serialize the response
scan_serializer = ScanSerializer(created_scans, many=True)
return success_response(
data={
'count': len(created_scans),
'scans': scan_serializer.data
},
status_code=status.HTTP_201_CREATED
)
except ObjectDoesNotExist as e:
# Resource-not-found error (raised by the service layer)
return error_response(
code=ErrorCodes.NOT_FOUND,
message=str(e),
status_code=status.HTTP_404_NOT_FOUND
)
except ValidationError as e:
# Validation error (raised by the service layer)
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message=str(e),
status_code=status.HTTP_400_BAD_REQUEST
)
except (DatabaseError, IntegrityError, OperationalError):
# Database error
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Database error',
status_code=status.HTTP_503_SERVICE_UNAVAILABLE
)
# All snapshot-related actions and exports have moved to the snapshot ViewSets in asset/views.py
# GET /api/scans/{id}/subdomains/ -> SubdomainSnapshotViewSet
# GET /api/scans/{id}/subdomains/export/ -> SubdomainSnapshotViewSet.export
# GET /api/scans/{id}/websites/ -> WebsiteSnapshotViewSet
# GET /api/scans/{id}/websites/export/ -> WebsiteSnapshotViewSet.export
# GET /api/scans/{id}/directories/ -> DirectorySnapshotViewSet
# GET /api/scans/{id}/directories/export/ -> DirectorySnapshotViewSet.export
# GET /api/scans/{id}/endpoints/ -> EndpointSnapshotViewSet
# GET /api/scans/{id}/endpoints/export/ -> EndpointSnapshotViewSet.export
# GET /api/scans/{id}/ip-addresses/ -> HostPortMappingSnapshotViewSet
# GET /api/scans/{id}/ip-addresses/export/ -> HostPortMappingSnapshotViewSet.export
# GET /api/scans/{id}/vulnerabilities/ -> VulnerabilitySnapshotViewSet
@action(detail=False, methods=['post', 'delete'], url_path='bulk-delete')
def bulk_delete(self, request):
"""
批量删除扫描记录
请求参数:
- ids: 扫描ID列表 (list[int], 必填)
示例请求:
POST /api/scans/bulk-delete/
{
"ids": [1, 2, 3]
}
返回:
- message: 成功消息
- deletedCount: 实际删除的记录数
注意:
- 使用级联删除,会同时删除关联的子域名、端点等数据
- 只删除存在的记录不存在的ID会被忽略
"""
ids = request.data.get('ids', [])
# Parameter validation
if not ids:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='Missing required parameter: ids',
status_code=status.HTTP_400_BAD_REQUEST
)
if not isinstance(ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ids must be an array',
status_code=status.HTTP_400_BAD_REQUEST
)
if not all(isinstance(i, int) for i in ids):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='All elements in ids array must be integers',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
# Bulk delete via the service layer (two-phase delete)
scan_service = ScanService()
result = scan_service.delete_scans_two_phase(ids)
return success_response(
data={
'deletedCount': result['soft_deleted_count'],
'deletedScans': result['scan_names']
}
)
except ValueError as e:
# No records found
return error_response(
code=ErrorCodes.NOT_FOUND,
message=str(e),
status_code=status.HTTP_404_NOT_FOUND
)
except Exception as e:
logger.exception("批量删除扫描任务时发生错误")
return error_response(
code=ErrorCodes.SERVER_ERROR,
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@action(detail=False, methods=['get'])
def statistics(self, request):
"""
获取扫描统计数据
返回扫描任务的汇总统计信息,用于仪表板和扫描历史页面。
使用缓存字段聚合查询,性能优异。
返回:
- total: 总扫描次数
- running: 运行中的扫描数量
- completed: 已完成的扫描数量
- failed: 失败的扫描数量
- totalVulns: 总共发现的漏洞数量
- totalSubdomains: 总共发现的子域名数量
- totalEndpoints: 总共发现的端点数量
- totalAssets: 总资产数
"""
try:
# Fetch statistics via the service layer
scan_service = ScanService()
stats = scan_service.get_statistics()
return success_response(
data={
'total': stats['total'],
'running': stats['running'],
'completed': stats['completed'],
'failed': stats['failed'],
'totalVulns': stats['total_vulns'],
'totalSubdomains': stats['total_subdomains'],
'totalEndpoints': stats['total_endpoints'],
'totalWebsites': stats['total_websites'],
'totalAssets': stats['total_assets'],
}
)
except (DatabaseError, OperationalError):
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Database error',
status_code=status.HTTP_503_SERVICE_UNAVAILABLE
)
@action(detail=True, methods=['post'])
def stop(self, request, pk=None): # pylint: disable=unused-argument
"""
停止扫描任务
URL: POST /api/scans/{id}/stop/
功能:
- 终止正在运行或初始化的扫描任务
- 更新扫描状态为 CANCELLED
状态限制:
- 只能停止 RUNNING 或 INITIATED 状态的扫描
- 已完成、失败或取消的扫描无法停止
返回:
- message: 成功消息
- revokedTaskCount: 取消的 Flow Run 数量
"""
try:
# Handle the stop logic via the service layer
scan_service = ScanService()
success, revoked_count = scan_service.stop_scan(scan_id=pk)
if not success:
# Check whether the failure is due to a disallowed status
scan = scan_service.get_scan(scan_id=pk, prefetch_relations=False)
if scan and scan.status not in [ScanStatus.RUNNING, ScanStatus.INITIATED]:
return error_response(
code=ErrorCodes.BAD_REQUEST,
message=f'Cannot stop scan: current status is {ScanStatus(scan.status).label}',
status_code=status.HTTP_400_BAD_REQUEST
)
# Other failure reasons
return error_response(
code=ErrorCodes.SERVER_ERROR,
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
return success_response(
data={'revokedTaskCount': revoked_count}
)
except ObjectDoesNotExist:
return error_response(
code=ErrorCodes.NOT_FOUND,
message=f'Scan ID {pk} not found',
status_code=status.HTTP_404_NOT_FOUND
)
except (DatabaseError, IntegrityError, OperationalError):
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Database error',
status_code=status.HTTP_503_SERVICE_UNAVAILABLE
)
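A hedged sketch of driving the quick action above over HTTP. The host and auth scheme are assumptions; the route and body shape follow the docstrings in this ViewSet:

import requests

resp = requests.post(
    "https://lunafox.example.com/api/scans/quick/",
    json={
        "targets": [{"name": "example.com"}, {"name": "https://example.com/api"}],
        "configuration": "subdomain_discovery:\n  enabled: true\n",
        "engine_ids": [1],
        "engine_names": ["Engine A"],
    },
    headers={"Authorization": "Bearer <token>"},  # auth scheme assumed
    timeout=30,
)
print(resp.status_code)  # 201 on success; 400/422 on validation failures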

View File

@@ -1,65 +1,38 @@
# ==================== Database configuration (PostgreSQL) ====================
# DB_HOST decides whether to use the local container or a remote database:
# - postgres / localhost / 127.0.0.1 → start the local PostgreSQL container
# - any other address (e.g. 192.168.1.100) → use a remote database; the local container is not started
# ============================================
# Docker Image Configuration
# ============================================
IMAGE_TAG=dev
# ============================================
# Required: Security Configuration
# MUST change these in production!
# ============================================
JWT_SECRET=change-me-in-production-use-a-long-random-string
WORKER_TOKEN=change-me-worker-token
# ============================================
# Required: Docker Service Hosts
# ============================================
DB_HOST=postgres
DB_PORT=5432
DB_NAME=xingrin
DB_USER=postgres
DB_PASSWORD=123.com
# ==================== Redis configuration ====================
# Redis is used only on the internal Docker network; no public port is exposed
DB_PASSWORD=postgres
REDIS_HOST=redis
REDIS_DB=0
# ==================== Service port configuration ====================
# SERVER_PORT is the in-container Django / uvicorn port (reverse-proxied by nginx, not exposed publicly)
SERVER_PORT=8888
# ============================================
# Optional: Override defaults if needed
# ============================================
# PUBLIC_URL=https://your-domain.com:8083
# SERVER_PORT=8080
# GIN_MODE=release
# DB_PORT=5432
# DB_USER=postgres
# DB_NAME=lunafox
# DB_SSLMODE=disable
# DB_MAX_OPEN_CONNS=50
# DB_MAX_IDLE_CONNS=10
# REDIS_PORT=6379
# REDIS_PASSWORD=
# LOG_LEVEL=info
# LOG_FORMAT=json
# WORDLISTS_BASE_PATH=/opt/lunafox/wordlists
# ==================== Remote Worker configuration ====================
# Address remote Workers use to reach the main server:
# - local-only deployment: server (the Docker-internal service name)
# - with remote Workers: the main server's public IP or domain (e.g. 192.168.1.100 or xingrin.example.com)
# Note: remote Workers connect via https://{PUBLIC_HOST}:{PUBLIC_PORT} (nginx proxies to backend 8888)
PUBLIC_HOST=server
# Public HTTPS port
PUBLIC_PORT=8083
# ==================== Django core configuration ====================
# Replace with a strong random key in production
DJANGO_SECRET_KEY=django-insecure-change-me-in-production
# Debug mode (keep False in production)
DEBUG=False
# Allowed frontend origins (for CORS)
CORS_ALLOWED_ORIGINS=http://localhost:3000
# ==================== Path configuration (in-container paths) ====================
# Scan results directory
SCAN_RESULTS_DIR=/opt/xingrin/results
# Django log directory
# Note: if left empty or removed, logs go only to the Docker console (stdout), not to files
LOG_DIR=/opt/xingrin/logs
# Scan tool path (in-container, FHS-compliant, isolated to avoid name clashes)
# The default is set in settings.py; no need to change it unless reverting to the old path
SCAN_TOOLS_PATH=/opt/xingrin-tools/bin
# ==================== Log level configuration ====================
# Application log level: DEBUG / INFO / WARNING / ERROR
LOG_LEVEL=INFO
# Whether to record command execution logs (heavy scanning increases disk usage)
ENABLE_COMMAND_LOGGING=true
# ==================== Worker API key configuration ====================
# Worker node auth key (API authentication between Workers and the main server)
# Replace with a strong random key in production (32+ random characters recommended)
# Generate with: openssl rand -hex 32
WORKER_API_KEY=change-me-to-a-secure-random-key
# ==================== Docker Hub configuration (production mode) ====================
# Used when pulling images from Docker Hub in production mode
DOCKER_USER=yyhuni
# Image version tag (read automatically from the VERSION file at install time)
# The VERSION file is updated by CI and matches the Git tag
# Note: set automatically by install.sh; do not edit by hand
IMAGE_TAG=__WILL_BE_SET_BY_INSTALLER__

View File

@@ -1,133 +1,122 @@
services:
# PostgreSQL (optional; not started when using a remote database)
# Local mode: docker compose --profile local-db up -d
# Remote mode: docker compose up -d (set DB_HOST to the remote address)
# Uses a custom image with the pg_ivm extension preinstalled
# Agents should be registered and started via the install script (/api/agents/install.sh)
postgres:
profiles: ["local-db"]
build:
context: ./postgres
dockerfile: Dockerfile
image: ${DOCKER_USER:-yyhuni}/xingrin-postgres:${IMAGE_TAG:-dev}
restart: always
image: postgres:16.3-alpine
restart: "on-failure:3"
environment:
POSTGRES_DB: ${DB_NAME}
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_NAME:-lunafox}
POSTGRES_USER: ${DB_USER:-postgres}
POSTGRES_PASSWORD: ${DB_PASSWORD:-postgres}
ports:
- "${DB_PORT:-5432}:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./postgres/init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh
ports:
- "${DB_PORT}:5432"
command: >
postgres
-c shared_preload_libraries=pg_ivm
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-postgres}"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
restart: always
image: redis:7.4.7-alpine
restart: "on-failure:3"
ports:
- "${REDIS_PORT:-6379}:6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
test: [CMD, redis-cli, ping]
interval: 5s
timeout: 5s
retries: 5
server:
build:
context: ..
dockerfile: docker/server/Dockerfile
restart: always
image: golang:1.25.6
restart: "on-failure:3"
env_file:
- .env
environment:
- IMAGE_TAG=${IMAGE_TAG:-dev}
- PUBLIC_URL=${PUBLIC_URL:-}
- GOMODCACHE=/go/pkg/mod
- GOCACHE=/root/.cache/go-build
- GO111MODULE=${GO111MODULE:-on}
- GOPROXY=${GOPROXY:-https://goproxy.cn,direct}
ports:
- "8888:8888"
- "8080:8080"
working_dir: /workspace/server
command: sh -c "go install github.com/air-verse/air@latest && air -c .air.toml"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
volumes:
# Unified data directory mount
- /opt/xingrin:/opt/xingrin
- /opt/lunafox:/opt/lunafox
- /var/run/docker.sock:/var/run/docker.sock
# OOM priority: -500 protects the core service
oom_score_adj: -500
healthcheck:
# Use the dedicated health check endpoint (no auth required)
test: ["CMD", "curl", "-f", "http://localhost:8888/api/health/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
# Agent: heartbeat reporting + load monitoring + version checks
agent:
build:
context: ..
dockerfile: docker/agent/Dockerfile
args:
IMAGE_TAG: ${IMAGE_TAG:-dev}
restart: always
environment:
- SERVER_URL=http://server:8888
- WORKER_NAME=Local-Worker
- IS_LOCAL=true
- IMAGE_TAG=${IMAGE_TAG:-dev}
- WORKER_API_KEY=${WORKER_API_KEY}
depends_on:
server:
condition: service_healthy
volumes:
- /proc:/host/proc:ro
- ../server:/workspace/server
- go-mod-cache:/go/pkg/mod
- go-build-cache:/root/.cache/go-build
frontend:
build:
context: ..
dockerfile: docker/frontend/Dockerfile
args:
IMAGE_TAG: ${IMAGE_TAG:-dev}
restart: always
# OOM priority: -500 protects the web UI
oom_score_adj: -500
image: node:20.20.0-alpine
restart: "on-failure:3"
environment:
- NODE_ENV=development
- API_HOST=server
- NEXT_PUBLIC_BACKEND_URL=${NEXT_PUBLIC_BACKEND_URL:-}
- PORT=3000
- HOSTNAME=0.0.0.0
ports:
- "3000:3000"
working_dir: /app
command: sh -c "corepack enable && corepack prepare pnpm@latest --activate && if [ ! -d node_modules/.pnpm ]; then pnpm install; fi && pnpm dev"
depends_on:
server:
condition: service_healthy
condition: service_started
volumes:
- ../frontend:/app
- frontend_node_modules:/app/node_modules
- frontend_pnpm_store:/root/.local/share/pnpm/store
healthcheck:
test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000',res=>process.exit(res.statusCode<500?0:1)).on('error',()=>process.exit(1))"]
interval: 5s
timeout: 5s
retries: 20
start_period: 20s
nginx:
build:
context: ..
dockerfile: docker/nginx/Dockerfile
restart: always
# OOM priority: -500 to protect the entry gateway
oom_score_adj: -500
image: yyhuni/lunafox-nginx:${IMAGE_TAG:-dev}
restart: "on-failure:3"
depends_on:
server:
condition: service_healthy
frontend:
condition: service_started
frontend:
condition: service_healthy
ports:
- "8083:8083"
volumes:
# SSL certificate mount (easy to update)
- ./nginx/ssl:/etc/nginx/ssl:ro
# Worker: scan-task execution container (built in dev mode)
# Worker: build image for task execution (not run in dev by default).
worker:
build:
context: ..
dockerfile: docker/worker/Dockerfile
image: docker-worker:${IMAGE_TAG:-latest}-dev
context: ../worker
dockerfile: Dockerfile
image: yyhuni/lunafox-worker:${IMAGE_TAG:-dev}
restart: "no"
volumes:
- /opt/xingrin:/opt/xingrin
- /opt/lunafox:/opt/lunafox
command: echo "Worker image built for development"
volumes:
postgres_data:
go-mod-cache:
go-build-cache:
frontend_node_modules:
frontend_pnpm_store:
networks:
default:
name: xingrin_network # Fixed network name, independent of directory name
name: lunafox_network # Fixed network name, independent of directory name

View File

@@ -1,20 +1,12 @@
# ============================================
# Production configuration - uses prebuilt Docker Hub images
# ============================================
# Usage: docker compose up -d
#
# For development, use: docker compose -f docker-compose.dev.yml up -d
# ============================================
services:
# PostgreSQL (optional): not started when using a remote database
# Uses a custom image with the pg_ivm extension preinstalled
postgres:
profiles: ["local-db"]
build:
context: ./postgres
dockerfile: Dockerfile
image: ${DOCKER_USER:-yyhuni}/xingrin-postgres:${IMAGE_TAG:?IMAGE_TAG is required}
image: postgres:16.3-alpine
restart: always
environment:
POSTGRES_DB: ${DB_NAME}
@@ -22,12 +14,8 @@ services:
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- postgres_data:/var/lib/postgresql/data
- ./postgres/init-user-db.sh:/docker-entrypoint-initdb.d/init-user-db.sh
ports:
- "${DB_PORT}:5432"
command: >
postgres
-c shared_preload_libraries=pg_ivm
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
interval: 5s
@@ -35,7 +23,7 @@ services:
retries: 5
redis:
image: redis:7-alpine
image: redis:7.4.7-alpine
restart: always
healthcheck:
test: ["CMD", "redis-cli", "ping"]
@@ -44,68 +32,47 @@ services:
retries: 5
server:
image: ${DOCKER_USER:-yyhuni}/xingrin-server:${IMAGE_TAG:?IMAGE_TAG is required}
image: yyhuni/lunafox-server:${IMAGE_TAG:?IMAGE_TAG is required}
restart: always
env_file:
- .env
environment:
- IMAGE_TAG=${IMAGE_TAG}
- PUBLIC_URL=${PUBLIC_URL:-}
depends_on:
redis:
condition: service_healthy
volumes:
# Unified data directory mount
- /opt/xingrin:/opt/xingrin
# Docker socket mount: lets the Django server run local docker commands (for local worker task dispatch)
- /opt/lunafox:/opt/lunafox
- /var/run/docker.sock:/var/run/docker.sock
# OOM priority: -500 lowers the chance of being picked by the OOM killer, protecting the core service
oom_score_adj: -500
healthcheck:
# Use the dedicated health-check endpoint (no authentication required)
test: ["CMD", "curl", "-f", "http://localhost:8888/api/health/"]
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
# ============================================
# Agent: lightweight heartbeat reporting + load monitoring (~10 MB)
# Scan tasks are dispatched to dynamic containers via task_distributor
# ============================================
agent:
image: ${DOCKER_USER:-yyhuni}/xingrin-agent:${IMAGE_TAG:?IMAGE_TAG is required}
container_name: xingrin-agent
restart: always
environment:
- SERVER_URL=http://server:8888
- WORKER_NAME=Local-Worker
- IS_LOCAL=true
- IMAGE_TAG=${IMAGE_TAG}
- WORKER_API_KEY=${WORKER_API_KEY}
depends_on:
server:
condition: service_healthy
volumes:
- /proc:/host/proc:ro
frontend:
image: ${DOCKER_USER:-yyhuni}/xingrin-frontend:${IMAGE_TAG:?IMAGE_TAG is required}
image: yyhuni/lunafox-frontend:${IMAGE_TAG:?IMAGE_TAG is required}
restart: always
# OOM priority: -500 to protect the web UI
oom_score_adj: -500
depends_on:
server:
condition: service_healthy
healthcheck:
test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000',res=>process.exit(res.statusCode<500?0:1)).on('error',()=>process.exit(1))"]
interval: 5s
timeout: 5s
retries: 20
start_period: 20s
nginx:
image: ${DOCKER_USER:-yyhuni}/xingrin-nginx:${IMAGE_TAG:?IMAGE_TAG is required}
image: yyhuni/lunafox-nginx:${IMAGE_TAG:?IMAGE_TAG is required}
restart: always
# OOM priority: -500 to protect the entry gateway
oom_score_adj: -500
depends_on:
server:
condition: service_healthy
frontend:
condition: service_started
condition: service_healthy
ports:
- "8083:8083"
volumes:
@@ -116,4 +83,4 @@ volumes:
networks:
default:
name: xingrin_network # Fixed network name, independent of directory name
name: lunafox_network # Fixed network name, independent of directory name

View File

@@ -1,4 +1,4 @@
FROM nginx:1.27-alpine
FROM nginx:1.28.1-alpine
# Copy nginx config and certificates
COPY docker/nginx/nginx.conf /etc/nginx/nginx.conf

View File

@@ -9,7 +9,7 @@ http {
# Upstream services
upstream backend {
server server:8888;
server server:8080;
}
upstream frontend {
@@ -31,20 +31,11 @@ http {
# Auto-redirect when an HTTP request hits the HTTPS port
error_page 497 =301 https://$host:$server_port$request_uri;
# Fingerprint header - lets search engines such as FOFA/Shodan identify the service
add_header X-Powered-By "Xingrin ASM" always;
# Fingerprint header
add_header X-Powered-By "LunaFox ASM" always;
location /api/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 300s; # 5 minutes, to support large exports
proxy_send_timeout 300s;
proxy_pass http://backend;
}
# WebSocket reverse proxy
location /ws/ {
# Agent WebSocket
location /api/agents/ws {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
@@ -52,9 +43,52 @@ http {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_read_timeout 86400; # 24 hours, prevents WebSocket timeouts
}
# Health check
location /health {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_read_timeout 30s;
proxy_send_timeout 30s;
proxy_pass http://backend;
}
location /api/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_read_timeout 300s; # 5 minutes, to support large exports
proxy_send_timeout 300s;
proxy_pass http://backend;
}
# Next.js HMR (dev)
location /_next/webpack-hmr {
proxy_pass http://frontend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_read_timeout 86400;
}
# Frontend reverse proxy
location / {
proxy_set_header Host $host;

View File

@@ -0,0 +1,574 @@
# Redis Stream Queue Design Document
## Overview
This document describes a design that uses Redis Streams as a message queue to optimize large-scale data writes.
## Background
### Current problems
When scanning large numbers of endpoints (hundreds of thousands), the current HTTP bulk-write approach has the following problems:
1. **Performance bottleneck**: 500K endpoints (15 KB each) take 83-166 minutes
2. **Database I/O pressure**: 20 workers writing concurrently saturate database I/O
3. **Worker blocking risk**: with bulk writes + backpressure, workers block while waiting
### Goals
- 10x performance improvement (83 minutes → 8 minutes)
- Workers never block (stable scan speed)
- No data loss (guaranteed by persistence)
- No new components to deploy (reuses the existing Redis)
## Architecture
### Overall architecture
```
Worker scan → Redis Stream → Server consumer → PostgreSQL
```
### Data flow
1. **Worker side**: scans an endpoint → publishes it to a Redis Stream
2. **Redis Stream**: buffers messages (persisted to disk)
3. **Server side**: single-threaded consumer → bulk writes to the database
### Key properties
- **Decoupling**: workers and the database are fully decoupled
- **Backpressure**: the server controls consumption speed, protecting the database
- **Persistence**: Redis AOF guarantees no data loss
- **Scalability**: supports concurrent writes from many workers
## Redis Stream configuration
### Enable AOF persistence
```conf
# redis.conf
appendonly yes
appendfsync everysec  # fsync once per second (balances performance and safety)
```
**Effect**:
- Data is persisted to disk
- A Redis crash loses at most 1 second of data
- Minimal performance impact
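To catch misconfigured deployments early, the server can verify this setting at startup. A minimal sketch, assuming go-redis v9 and that failing hard is acceptable (the `mustAOF` helper is illustrative, not part of the codebase):
```go
// mustAOF aborts startup when AOF is disabled, since the no-data-loss
// guarantee of this design depends on appendonly=yes.
func mustAOF(ctx context.Context, client *redis.Client) {
	cfg, err := client.ConfigGet(ctx, "appendonly").Result()
	if err != nil {
		log.Fatalf("CONFIG GET appendonly failed: %v", err)
	}
	if cfg["appendonly"] != "yes" {
		log.Fatal("Redis AOF is disabled; enable appendonly before using the stream queue")
	}
}
```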
### Memory configuration
```conf
# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru  # evict the least-recently-used keys when memory is full
```
Note: `allkeys-lru` can evict stream keys under memory pressure; if queued messages must never be dropped, `noeviction` is the safer policy for a queue workload.
## Implementation
### 1. Worker side: publish to the Redis Stream
#### Code layout
```
worker/internal/queue/
├── redis_publisher.go  # Redis publisher
└── types.go            # data type definitions
```
#### Core implementation
```go
// worker/internal/queue/redis_publisher.go
package queue

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/redis/go-redis/v9"
)

type RedisPublisher struct {
	client *redis.Client
}

func NewRedisPublisher(redisURL string) (*RedisPublisher, error) {
	opt, err := redis.ParseURL(redisURL)
	if err != nil {
		return nil, err
	}
	client := redis.NewClient(opt)
	// Verify connectivity
	if err := client.Ping(context.Background()).Err(); err != nil {
		return nil, err
	}
	return &RedisPublisher{client: client}, nil
}

// PublishEndpoint publishes an endpoint to the scan's Redis Stream.
func (p *RedisPublisher) PublishEndpoint(ctx context.Context, scanID int, endpoint Endpoint) error {
	data, err := json.Marshal(endpoint)
	if err != nil {
		return err
	}
	streamName := fmt.Sprintf("endpoints:%d", scanID)
	return p.client.XAdd(ctx, &redis.XAddArgs{
		Stream: streamName,
		MaxLen: 1000000, // keep at most 1M messages (prevents unbounded memory growth)
		Approx: true,    // approximate trimming (better performance)
		Values: map[string]interface{}{
			"data": data,
		},
	}).Err()
}

// Close closes the Redis connection.
func (p *RedisPublisher) Close() error {
	return p.client.Close()
}
```
#### Usage example
```go
// Worker scan flow
func (w *Worker) ScanEndpoints(ctx context.Context, scanID int) error {
	// Initialize the Redis publisher
	publisher, err := queue.NewRedisPublisher(os.Getenv("REDIS_URL"))
	if err != nil {
		return err
	}
	defer publisher.Close()

	// Scan endpoints
	for endpoint := range w.scan() {
		// Publish to the Redis Stream (non-blocking, very fast)
		if err := publisher.PublishEndpoint(ctx, scanID, endpoint); err != nil {
			log.Printf("Failed to publish endpoint: %v", err)
			// Optionally retry or record the error
		}
	}
	return nil
}
```
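The error branch above only logs and drops the message; when that is not acceptable, a small retry wrapper can absorb transient Redis failures. A minimal sketch with linear backoff, assuming the `RedisPublisher` above (`publishWithRetry` is an illustrative name, not part of the design):
```go
// publishWithRetry retries a failed publish with linear backoff before
// giving up, so brief Redis hiccups do not silently drop endpoints.
func publishWithRetry(ctx context.Context, p *queue.RedisPublisher, scanID int, ep queue.Endpoint) error {
	const attempts = 3
	var err error
	for i := 0; i < attempts; i++ {
		if err = p.PublishEndpoint(ctx, scanID, ep); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Duration(i+1) * 200 * time.Millisecond):
		}
	}
	return fmt.Errorf("publish failed after %d attempts: %w", attempts, err)
}
```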
### 2. Server side: consume the Redis Stream
#### Code layout
```
server/internal/queue/
├── redis_consumer.go  # Redis consumer
├── batch_writer.go    # batch writer
└── types.go           # data type definitions
```
#### Core implementation
```go
// server/internal/queue/redis_consumer.go
package queue

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"github.com/redis/go-redis/v9"
	"github.com/yyhuni/lunafox/server/internal/repository"
)

type EndpointConsumer struct {
	client     *redis.Client
	repository *repository.EndpointRepository
}

func NewEndpointConsumer(redisURL string, repo *repository.EndpointRepository) (*EndpointConsumer, error) {
	opt, err := redis.ParseURL(redisURL)
	if err != nil {
		return nil, err
	}
	client := redis.NewClient(opt)
	return &EndpointConsumer{
		client:     client,
		repository: repo,
	}, nil
}

// Start runs the consumer loop (single-threaded, which caps the write rate).
func (c *EndpointConsumer) Start(ctx context.Context, scanID int) error {
	streamName := fmt.Sprintf("endpoints:%d", scanID)
	groupName := "endpoint-consumers"
	consumerName := fmt.Sprintf("server-%d", time.Now().UnixNano())

	// Create the consumer group if it does not exist; a BUSYGROUP error just
	// means the group is already there.
	if err := c.client.XGroupCreateMkStream(ctx, streamName, groupName, "0").Err(); err != nil &&
		!strings.Contains(err.Error(), "BUSYGROUP") {
		return err
	}

	// Batch writer (flushes every 5000 entries)
	batchWriter := NewBatchWriter(c.repository, 5000)
	defer batchWriter.Flush()

	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		// Read messages in batches
		streams, err := c.client.XReadGroup(ctx, &redis.XReadGroupArgs{
			Group:    groupName,
			Consumer: consumerName,
			Streams:  []string{streamName, ">"},
			Count:    100,         // up to 100 messages per read
			Block:    time.Second, // block for 1 second
		}).Result()
		if err != nil {
			if err == redis.Nil {
				continue // no new messages
			}
			return err
		}

		// Process messages
		for _, stream := range streams {
			for _, message := range stream.Messages {
				// Parse the message; skip malformed payloads
				raw, ok := message.Values["data"].(string)
				if !ok {
					continue
				}
				var endpoint Endpoint
				if err := json.Unmarshal([]byte(raw), &endpoint); err != nil {
					// Record the error and move on to the next message
					continue
				}
				// Hand it to the batch writer
				if err := batchWriter.Add(endpoint); err != nil {
					return err
				}
				// Acknowledge the message (ACK)
				c.client.XAck(ctx, streamName, groupName, message.ID)
			}
		}

		// Periodic flush
		if batchWriter.ShouldFlush() {
			if err := batchWriter.Flush(); err != nil {
				return err
			}
		}
	}
}

// Close closes the Redis connection.
func (c *EndpointConsumer) Close() error {
	return c.client.Close()
}
```
#### Batch writer
```go
// server/internal/queue/batch_writer.go
package queue

import (
	"sync"

	"github.com/yyhuni/lunafox/server/internal/model"
	"github.com/yyhuni/lunafox/server/internal/repository"
)

type BatchWriter struct {
	repository *repository.EndpointRepository
	buffer     []model.Endpoint
	batchSize  int
	mu         sync.Mutex
}

func NewBatchWriter(repo *repository.EndpointRepository, batchSize int) *BatchWriter {
	return &BatchWriter{
		repository: repo,
		batchSize:  batchSize,
		buffer:     make([]model.Endpoint, 0, batchSize),
	}
}

// Add appends an endpoint to the buffer, flushing when the batch is full.
func (w *BatchWriter) Add(endpoint model.Endpoint) error {
	w.mu.Lock()
	w.buffer = append(w.buffer, endpoint)
	shouldFlush := len(w.buffer) >= w.batchSize
	w.mu.Unlock()
	if shouldFlush {
		return w.Flush()
	}
	return nil
}

// ShouldFlush reports whether the buffer has reached the batch size.
func (w *BatchWriter) ShouldFlush() bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	return len(w.buffer) >= w.batchSize
}

// Flush writes the buffered endpoints to the database in one bulk upsert.
func (w *BatchWriter) Flush() error {
	w.mu.Lock()
	if len(w.buffer) == 0 {
		w.mu.Unlock()
		return nil
	}
	// Copy the buffer so the lock is not held during the database write
	toWrite := make([]model.Endpoint, len(w.buffer))
	copy(toWrite, w.buffer)
	w.buffer = w.buffer[:0]
	w.mu.Unlock()

	// Bulk write (reuses the existing BulkUpsert method)
	_, err := w.repository.BulkUpsert(toWrite)
	return err
}
```
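One gap worth noting: `ShouldFlush` is purely size-based, so a trailing partial batch sits in the buffer until the deferred `Flush` at shutdown. A time-based variant closes that window; a minimal sketch, assuming `BatchWriter` gains a `lastFlush time.Time` field that `Flush` updates (both are our additions, not part of the design above):
```go
// FlushIfStale flushes a non-empty buffer once maxAge has passed since the
// previous flush, so a trailing partial batch does not linger until shutdown.
func (w *BatchWriter) FlushIfStale(maxAge time.Duration) error {
	w.mu.Lock()
	stale := len(w.buffer) > 0 && time.Since(w.lastFlush) > maxAge
	w.mu.Unlock()
	if stale {
		return w.Flush()
	}
	return nil
}
```
The consumer loop would then call `FlushIfStale(2*time.Second)` next to the existing `ShouldFlush` check.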
### 3. Starting the consumer on the server
```go
// server/internal/app/app.go
func Run(ctx context.Context, cfg config.Config) error {
	// ... existing code

	// Start the Redis consumer (runs in the background)
	consumer, err := queue.NewEndpointConsumer(cfg.RedisURL, endpointRepo)
	if err != nil {
		return err
	}
	go func() {
		// Consume all active scans, one consumer goroutine per scan
		started := make(map[int]bool)
		for {
			scans := scanRepo.GetActiveScans()
			for _, scan := range scans {
				if started[scan.ID] {
					continue // already consuming this scan; avoid duplicate goroutines
				}
				started[scan.ID] = true
				go consumer.Start(ctx, scan.ID)
			}
			time.Sleep(10 * time.Second)
		}
	}()

	// ... existing code
}
```
## Performance comparison
### 500K endpoints (15 KB each)
| Approach | Write speed | Total time | Memory | Worker blocking |
|------|---------|--------|---------|-----------|
| **Current (HTTP bulk)** | 100 rows/s | 83 min | 1.5 MB | No |
| **Redis Stream** | 1000 rows/s | 8 min | 75 MB | No |
**Improvement**: **10x throughput!** (Sanity check: 500,000 rows ÷ 100 rows/s ≈ 83 min; ÷ 1,000 rows/s ≈ 8.3 min.)
## Resource consumption
### Redis
| Item | Cost |
|------|------|
| Memory | ~500 MB (buffering 1M messages) |
| CPU | ~10% (serialization/deserialization) |
| Disk | ~7.5 GB (AOF persistence) |
| Bandwidth | ~50 MB/s |
### Server
| Item | Cost |
|------|------|
| Memory | 75 MB (batch-write buffer) |
| CPU | 30% (deserialization + database writes) |
| DB connections | 1 (single-threaded consumer) |
## Reliability guarantees
### No data loss
1. **Redis AOF persistence**: synced to disk every second; at most 1 second of data is lost
2. **Message acknowledgement**: the server only ACKs after processing succeeds
3. **Retries**: messages that are never ACKed stay in the consumer group's pending list and can be reclaimed by a consumer
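Reclaiming those pending entries does require an explicit step. A minimal sketch using `XAUTOCLAIM` via go-redis v9, assuming the `EndpointConsumer` above (`reclaimPending` and the 5-minute idle threshold are illustrative):
```go
// reclaimPending takes over messages another consumer read but never ACKed
// (e.g., because it crashed) once they have been idle for at least MinIdle.
func (c *EndpointConsumer) reclaimPending(ctx context.Context, scanID int, consumerName string) error {
	streamName := fmt.Sprintf("endpoints:%d", scanID)
	msgs, _, err := c.client.XAutoClaim(ctx, &redis.XAutoClaimArgs{
		Stream:   streamName,
		Group:    "endpoint-consumers",
		Consumer: consumerName,
		MinIdle:  5 * time.Minute, // only claim messages idle long enough
		Start:    "0-0",           // scan the pending list from the beginning
		Count:    100,
	}).Result()
	if err != nil {
		return err
	}
	for _, msg := range msgs {
		// ...process msg exactly like a fresh read, then acknowledge it
		c.client.XAck(ctx, streamName, "endpoint-consumers", msg.ID)
	}
	return nil
}
```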
### Failure recovery
| Failure scenario | Recovery mechanism |
|---------|---------|
| Worker crash | Messages already sent to Redis are unaffected |
| Redis crash | Recovered from AOF; at most 1 second of data lost |
| Server crash | Un-ACKed messages are re-delivered within the group |
| Database crash | Messages stay in Redis; consumption resumes after recovery |
## Scalability
### Multiple workers
- Redis Streams natively support multiple producers
- No extra configuration needed
### Multiple server consumers
```go
// Start several consumers (load balancing); each Start call derives its
// own consumer name, so the group spreads messages across them
for i := 0; i < 3; i++ {
	go consumer.Start(ctx, scanID)
}
```
The Redis Stream consumer group distributes messages across consumers automatically, balancing the load.
## Monitoring and operations
### Metrics
```go
// GetQueueLength returns the number of entries currently in a scan's stream.
func (c *EndpointConsumer) GetQueueLength(ctx context.Context, scanID int) (int64, error) {
	streamName := fmt.Sprintf("endpoints:%d", scanID)
	return c.client.XLen(ctx, streamName).Result()
}

// GetConsumerGroupInfo returns consumer-group stats for a scan's stream.
func (c *EndpointConsumer) GetConsumerGroupInfo(ctx context.Context, scanID int) ([]redis.XInfoGroup, error) {
	streamName := fmt.Sprintf("endpoints:%d", scanID)
	return c.client.XInfoGroups(ctx, streamName).Result()
}
```
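How these hooks are wired up is left open; one option is a ticker goroutine that logs the backlog (the interval and threshold below are illustrative):
```go
// watchQueue logs the stream length periodically and flags a growing
// backlog, giving operators a cheap signal before memory pressure builds.
func watchQueue(ctx context.Context, c *EndpointConsumer, scanID int) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			n, err := c.GetQueueLength(ctx, scanID)
			if err != nil {
				log.Printf("queue length check failed: %v", err)
				continue
			}
			if n > 500000 {
				log.Printf("scan %d backlog: %d pending messages", scanID, n)
			}
		}
	}
}
```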
### Cleanup strategy
```go
// CleanupStream deletes a scan's stream once the scan has finished.
func (c *EndpointConsumer) CleanupStream(ctx context.Context, scanID int) error {
	streamName := fmt.Sprintf("endpoints:%d", scanID)
	return c.client.Del(ctx, streamName).Err()
}
```
## Recommended configuration
### Redis
```conf
# redis.conf
# Persistence
appendonly yes
appendfsync everysec
# Memory
maxmemory 2gb
maxmemory-policy allkeys-lru
# Performance
tcp-backlog 511
timeout 0
tcp-keepalive 300
```
### Environment variables
```bash
# Worker side
REDIS_URL=redis://localhost:6379/0
# Server side
REDIS_URL=redis://localhost:6379/0
```
## Migration plan
### Phase 1: preparation (1 day)
1. Enable Redis AOF persistence
2. Implement the worker-side Redis publisher
3. Implement the server-side Redis consumer
### Phase 2: testing (2 days)
1. Unit tests
2. Integration tests
3. Performance tests (simulate 500K rows)
### Phase 3: gradual rollout (3 days)
1. 10% of traffic on Redis Stream
2. 50% of traffic on Redis Stream
3. 100% of traffic on Redis Stream
### Phase 4: cleanup (1 day)
1. Remove the old HTTP bulk-write code
2. Update the documentation
## Risks and mitigations
### Risk 1: Redis memory overflow
**Mitigations**:
- Set a `maxmemory` limit
- Cap stream length with `MaxLen`
- Monitor Redis memory usage
### Risk 2: message backlog
**Mitigations**:
- Add more server-side consumers
- Optimize database write performance
- Monitor queue length
### Risk 3: data loss
**Mitigations**:
- Enable AOF persistence
- Use message acknowledgement
- Back up Redis regularly
## Summary
### Advantages
- ✅ 10x performance improvement
- ✅ Workers never block
- ✅ No data loss (AOF persistence)
- ✅ No new components to deploy (reuses the existing Redis)
- ✅ Simple architecture, easy to maintain
### Good fit
- More than 100K rows of data
- Redis already deployed
- High-throughput writes required
- No complex message routing needed
### Poor fit
- Fewer than 100K rows (the current approach is sufficient)
- Complex message routing required (consider RabbitMQ)
- More than 10M rows (consider Kafka)
## References
- [Redis Streams documentation](https://redis.io/docs/data-types/streams/)
- [Redis persistence](https://redis.io/docs/management/persistence/)
- [go-redis documentation](https://redis.uptrace.dev/)

1
frontend/.gitignore vendored
View File

@@ -9,6 +9,7 @@
!.yarn/plugins
!.yarn/releases
!.yarn/versions
.pnpm-store/
# testing
/coverage

60
frontend/Dockerfile Normal file
View File

@@ -0,0 +1,60 @@
# Frontend Next.js Dockerfile
# Multi-stage build with BuildKit caching
# ==================== Dependencies stage ====================
FROM node:20.20.0-alpine AS deps
WORKDIR /app
# Install pnpm
RUN corepack enable && corepack prepare pnpm@latest --activate
# Copy dependency manifests
COPY frontend/package.json frontend/pnpm-lock.yaml ./
# Install dependencies (BuildKit cache)
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
pnpm install --frozen-lockfile
# ==================== Build stage ====================
FROM node:20.20.0-alpine AS builder
WORKDIR /app
RUN corepack enable && corepack prepare pnpm@latest --activate
# Copy deps
COPY --from=deps /app/node_modules ./node_modules
COPY frontend/ ./
# Build-time env
ARG IMAGE_TAG=unknown
ENV NEXT_PUBLIC_IMAGE_TAG=${IMAGE_TAG}
# Use service name "server" inside Docker network
ENV API_HOST=server
# Build (BuildKit cache)
RUN --mount=type=cache,target=/app/.next/cache \
pnpm build
# ==================== Runtime stage ====================
FROM node:20.20.0-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy build output
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]

View File

@@ -3,7 +3,9 @@ import type { Metadata } from "next"
import { NextIntlClientProvider } from 'next-intl'
import { getMessages, setRequestLocale, getTranslations } from 'next-intl/server'
import { notFound } from 'next/navigation'
import { cookies } from "next/headers"
import { locales, localeHtmlLang, type Locale } from '@/i18n/config'
import { COLOR_THEME_COOKIE_KEY, isColorThemeId, DEFAULT_COLOR_THEME_ID, isDarkColorTheme } from "@/lib/color-themes"
// Import global style files
import "../globals.css"
@@ -25,6 +27,7 @@ import Script from "next/script"
import { QueryProvider } from "@/components/providers/query-provider"
import { ThemeProvider } from "@/components/providers/theme-provider"
import { UiI18nProvider } from "@/components/providers/ui-i18n-provider"
import { ColorThemeInit } from "@/components/color-theme-init"
// Import common layout components
import { RoutePrefetch } from "@/components/route-prefetch"
@@ -40,8 +43,15 @@ export async function generateMetadata({ params }: { params: Promise<{ locale: s
title: t('title'),
description: t('description'),
keywords: t('keywords').split(',').map(k => k.trim()),
generator: "Xingrin ASM Platform",
generator: "LunaFox ASM Platform",
authors: [{ name: "yyhuni" }],
icons: {
icon: [
{ url: "/images/icon-64.png", sizes: "64x64", type: "image/png" },
{ url: "/images/icon-256.png", sizes: "256x256", type: "image/png" },
],
apple: [{ url: "/images/icon-256.png", sizes: "256x256", type: "image/png" }],
},
openGraph: {
title: t('ogTitle'),
description: t('ogDescription'),
@@ -94,9 +104,20 @@ export default async function LocaleLayout({
// Load translation messages
const messages = await getMessages()
const cookieStore = await cookies()
const cookieTheme = cookieStore.get(COLOR_THEME_COOKIE_KEY)?.value
const themeId = isColorThemeId(cookieTheme) ? cookieTheme : DEFAULT_COLOR_THEME_ID
const isDark = isDarkColorTheme(themeId)
return (
<html lang={localeHtmlLang[locale as Locale]} suppressHydrationWarning>
<html
lang={localeHtmlLang[locale as Locale]}
data-theme={themeId}
className={isDark ? "dark" : undefined}
suppressHydrationWarning
>
<body className={fontConfig.className} style={fontConfig.style}>
<ColorThemeInit />
{/* Load external scripts */}
<Script
src="https://tweakcn.com/live-preview.min.js"
@@ -110,7 +131,7 @@ export default async function LocaleLayout({
{/* ThemeProvider provides theme switching functionality */}
<ThemeProvider
attribute="class"
defaultTheme="dark"
defaultTheme={isDark ? "dark" : "light"}
enableSystem
disableTransitionOnChange
>

View File

@@ -24,5 +24,9 @@ export default function LoginLayout({
}: {
children: React.ReactNode
}) {
return children
return (
<>
{children}
</>
)
}

View File

@@ -2,124 +2,351 @@
import React from "react"
import { useRouter } from "next/navigation"
import { useTranslations } from "next-intl"
import Lottie from "lottie-react"
import securityAnimation from "@/public/animations/Security000-Purple.json"
import { Button } from "@/components/ui/button"
import { Input } from "@/components/ui/input"
import { Card, CardContent } from "@/components/ui/card"
import {
Field,
FieldGroup,
FieldLabel,
} from "@/components/ui/field"
import { Spinner } from "@/components/ui/spinner"
import { useLocale, useTranslations } from "next-intl"
import { useQueryClient } from "@tanstack/react-query"
import { TerminalLogin } from "@/components/ui/terminal-login"
import { LoadingState } from "@/components/loading-spinner"
import { useLogin, useAuth } from "@/hooks/use-auth"
import { useRoutePrefetch } from "@/hooks/use-route-prefetch"
import { vulnerabilityKeys } from "@/hooks/use-vulnerabilities"
import { getAssetStatistics, getStatisticsHistory } from "@/services/dashboard.service"
import { getScans } from "@/services/scan.service"
import { VulnerabilityService } from "@/services/vulnerability.service"
export default function LoginPage() {
// Preload all page components on login page
useRoutePrefetch()
const router = useRouter()
const queryClient = useQueryClient()
const { data: auth, isLoading: authLoading } = useAuth()
const { mutate: login, isPending } = useLogin()
const t = useTranslations("auth")
const [username, setUsername] = React.useState("")
const [password, setPassword] = React.useState("")
const { mutateAsync: login, isPending } = useLogin()
const t = useTranslations("auth.terminal")
const locale = useLocale()
// If already logged in, redirect to dashboard
const loginStartedRef = React.useRef(false)
const [loginReady, setLoginReady] = React.useState(false)
const [isReady, setIsReady] = React.useState(false)
const [loginProcessing, setLoginProcessing] = React.useState(false)
const [isExiting, setIsExiting] = React.useState(false)
const exitStartedRef = React.useRef(false)
const showLoading = !isReady || loginProcessing
const showExitOverlay = isExiting
const withLocale = React.useCallback((path: string) => {
if (path.startsWith(`/${locale}/`)) return path
const normalized = path.startsWith("/") ? path : `/${path}`
return `/${locale}${normalized}`
}, [locale])
// Hide the inline boot splash and show login content
React.useEffect(() => {
if (auth?.authenticated) {
router.push("/dashboard/")
let cancelled = false
const waitForLoad = new Promise<void>((resolve) => {
if (typeof document === "undefined") {
resolve()
return
}
if (document.readyState === "complete") {
resolve()
return
}
const handleLoad = () => resolve()
window.addEventListener("load", handleLoad, { once: true })
})
const waitForPrefetch = new Promise<void>((resolve) => {
if (typeof window === "undefined") {
resolve()
return
}
const w = window as Window & { __lunafoxRoutePrefetchDone?: boolean }
if (w.__lunafoxRoutePrefetchDone) {
resolve()
return
}
const handlePrefetchDone = () => resolve()
window.addEventListener("lunafox:route-prefetch-done", handlePrefetchDone, { once: true })
})
const waitForPrefetchOrTimeout = Promise.race([
waitForPrefetch,
new Promise<void>((resolve) => setTimeout(resolve, 3000)),
])
Promise.all([waitForLoad, waitForPrefetchOrTimeout]).then(() => {
if (cancelled) return
setIsReady(true)
})
return () => {
cancelled = true
}
}, [auth, router])
}, [])
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault()
login({ username, password })
}
// Extract the prefetch logic into a reusable function
const prefetchDashboardData = React.useCallback(async () => {
const scansParams = { page: 1, pageSize: 10 }
const vulnsParams = { page: 1, pageSize: 10 }
// Show spinner while loading
if (authLoading) {
return (
<div className="flex min-h-svh w-full flex-col items-center justify-center gap-4 bg-background">
<Spinner className="size-8 text-primary" />
<p className="text-muted-foreground text-sm" suppressHydrationWarning>loading...</p>
</div>
)
}
return Promise.allSettled([
queryClient.prefetchQuery({
queryKey: ["asset", "statistics"],
queryFn: getAssetStatistics,
}),
queryClient.prefetchQuery({
queryKey: ["asset", "statistics", "history", 7],
queryFn: () => getStatisticsHistory(7),
}),
queryClient.prefetchQuery({
queryKey: ["scans", scansParams],
queryFn: () => getScans(scansParams),
}),
queryClient.prefetchQuery({
queryKey: vulnerabilityKeys.list(vulnsParams),
queryFn: () => VulnerabilityService.getAllVulnerabilities(vulnsParams),
}),
])
}, [queryClient])
// Don't show login page if already logged in
if (auth?.authenticated) {
return null
// Memoize translations object to avoid recreating on every render
const translations = React.useMemo(() => ({
title: t("title"),
subtitle: t("subtitle"),
usernamePrompt: t("usernamePrompt"),
passwordPrompt: t("passwordPrompt"),
authenticating: t("authenticating"),
processing: t("processing"),
accessGranted: t("accessGranted"),
welcomeMessage: t("welcomeMessage"),
authFailed: t("authFailed"),
invalidCredentials: t("invalidCredentials"),
shortcuts: t("shortcuts"),
submit: t("submit"),
cancel: t("cancel"),
clear: t("clear"),
startEnd: t("startEnd"),
}), [t])
// If already logged in, warm up the dashboard, then redirect.
React.useEffect(() => {
if (authLoading) return
if (!auth?.authenticated) return
if (loginStartedRef.current) return
let cancelled = false
let timer: number | undefined
void (async () => {
setLoginProcessing(true)
await prefetchDashboardData()
if (cancelled) return
setLoginProcessing(false)
if (!exitStartedRef.current) {
exitStartedRef.current = true
setIsExiting(true)
timer = window.setTimeout(() => {
router.replace(withLocale("/dashboard/"))
}, 300)
}
})()
return () => {
cancelled = true
if (timer) window.clearTimeout(timer)
}
}, [auth?.authenticated, authLoading, prefetchDashboardData, router, withLocale])
React.useEffect(() => {
if (!loginReady) return
if (exitStartedRef.current) return
exitStartedRef.current = true
setIsExiting(true)
const timer = window.setTimeout(() => {
router.replace(withLocale("/dashboard/"))
}, 300)
return () => window.clearTimeout(timer)
}, [loginReady, router, withLocale])
const handleLogin = async (username: string, password: string) => {
loginStartedRef.current = true
setLoginReady(false)
setLoginProcessing(true)
// Run independent work in parallel: login verification + prefetching the dashboard bundle
const [loginRes] = await Promise.all([
login({ username, password }),
router.prefetch(withLocale("/dashboard/")),
])
// Prefetch dashboard data
await prefetchDashboardData()
// Prime auth cache so AuthLayout doesn't flash a full-screen loading state.
queryClient.setQueryData(["auth", "me"], {
authenticated: true,
user: loginRes.user,
})
setLoginProcessing(false)
setLoginReady(true)
}
return (
<div className="login-bg flex min-h-svh flex-col p-6 md:p-10">
{/* Main content area */}
<div className="flex-1 flex items-center justify-center">
<div className="w-full max-w-sm md:max-w-4xl">
<Card className="overflow-hidden p-0">
<CardContent className="grid p-0 md:grid-cols-2">
<form className="p-6 md:p-8" onSubmit={handleSubmit}>
<FieldGroup>
{/* Fingerprint identifier - for FOFA/Shodan and other search engines to identify */}
<meta name="generator" content="Xingrin ASM Platform" />
<div className="flex flex-col items-center gap-2 text-center">
<h1 className="text-2xl font-bold">{t("title")}</h1>
<p className="text-sm text-muted-foreground mt-1">
{t("subtitle")}
</p>
</div>
<Field>
<FieldLabel htmlFor="username">{t("username")}</FieldLabel>
<Input
id="username"
type="text"
placeholder={t("usernamePlaceholder")}
value={username}
onChange={(e) => setUsername(e.target.value)}
required
autoFocus
/>
</Field>
<Field>
<FieldLabel htmlFor="password">{t("password")}</FieldLabel>
<Input
id="password"
type="password"
placeholder={t("passwordPlaceholder")}
value={password}
onChange={(e) => setPassword(e.target.value)}
required
/>
</Field>
<Field>
<Button type="submit" className="w-full" disabled={isPending}>
{isPending ? t("loggingIn") : t("login")}
</Button>
</Field>
</FieldGroup>
</form>
<div className="bg-primary/5 relative hidden md:flex md:items-center md:justify-center">
<div className="text-center p-4">
<Lottie
animationData={securityAnimation}
loop={true}
className="w-96 h-96 mx-auto"
/>
</div>
</div>
</CardContent>
</Card>
<div className="relative flex min-h-svh flex-col bg-background text-foreground">
{showLoading && !showExitOverlay ? (
<LoadingState
active
message="loading..."
className="fixed inset-0 z-50 bg-background"
/>
) : null}
{showExitOverlay ? (
<div className="fixed inset-0 z-50 bg-background" />
) : null}
{/* Circuit Board Animation */}
<div className={`fixed inset-0 z-0 transition-opacity duration-300 ${isReady ? "opacity-100" : "opacity-0"}`}>
<div className="circuit-container">
{/* Grid pattern */}
<div className="circuit-grid" />
{/* === Main backbone traces === */}
{/* Horizontal main lines - 6 lines */}
<div className="trace trace-h" style={{ top: '12%', left: 0, width: '100%' }}>
<div className="trace-glow" style={{ animationDuration: '6s' }} />
</div>
<div className="trace trace-h" style={{ top: '28%', left: 0, width: '100%' }}>
<div className="trace-glow" style={{ animationDelay: '1s', animationDuration: '5s' }} />
</div>
<div className="trace trace-h" style={{ top: '44%', left: 0, width: '100%' }}>
<div className="trace-glow" style={{ animationDelay: '2s', animationDuration: '5.5s' }} />
</div>
<div className="trace trace-h" style={{ top: '60%', left: 0, width: '100%' }}>
<div className="trace-glow" style={{ animationDelay: '3s', animationDuration: '4.5s' }} />
</div>
<div className="trace trace-h" style={{ top: '76%', left: 0, width: '100%' }}>
<div className="trace-glow" style={{ animationDelay: '4s', animationDuration: '5s' }} />
</div>
<div className="trace trace-h" style={{ top: '92%', left: 0, width: '100%' }}>
<div className="trace-glow" style={{ animationDelay: '5s', animationDuration: '6s' }} />
</div>
{/* Vertical main lines - 6 lines */}
<div className="trace trace-v" style={{ left: '8%', top: 0, height: '100%' }}>
<div className="trace-glow trace-glow-v" style={{ animationDelay: '0.5s', animationDuration: '7s' }} />
</div>
<div className="trace trace-v" style={{ left: '24%', top: 0, height: '100%' }}>
<div className="trace-glow trace-glow-v" style={{ animationDelay: '1.5s', animationDuration: '6s' }} />
</div>
<div className="trace trace-v" style={{ left: '40%', top: 0, height: '100%' }}>
<div className="trace-glow trace-glow-v" style={{ animationDelay: '2.5s', animationDuration: '5.5s' }} />
</div>
<div className="trace trace-v" style={{ left: '56%', top: 0, height: '100%' }}>
<div className="trace-glow trace-glow-v" style={{ animationDelay: '3.5s', animationDuration: '6.5s' }} />
</div>
<div className="trace trace-v" style={{ left: '72%', top: 0, height: '100%' }}>
<div className="trace-glow trace-glow-v" style={{ animationDelay: '4.5s', animationDuration: '5s' }} />
</div>
<div className="trace trace-v" style={{ left: '88%', top: 0, height: '100%' }}>
<div className="trace-glow trace-glow-v" style={{ animationDelay: '5.5s', animationDuration: '6s' }} />
</div>
</div>
<style jsx>{`
.circuit-container {
position: absolute;
inset: 0;
background: var(--background);
overflow: hidden;
--login-grid: color-mix(in oklch, var(--foreground) 6%, transparent);
--login-trace: color-mix(in oklch, var(--foreground) 16%, transparent);
--login-glow: color-mix(in oklch, var(--primary) 65%, transparent);
--login-glow-muted: color-mix(in oklch, var(--foreground) 45%, transparent);
}
.circuit-grid {
position: absolute;
inset: 0;
background-image:
linear-gradient(var(--login-grid) 1px, transparent 1px),
linear-gradient(90deg, var(--login-grid) 1px, transparent 1px);
background-size: 40px 40px;
}
.trace {
position: absolute;
background: var(--login-trace);
overflow: hidden;
}
.trace-h {
height: 2px;
}
.trace-v {
width: 2px;
}
.trace-glow {
position: absolute;
top: -2px;
left: -20%;
width: 30%;
height: 6px;
background: linear-gradient(90deg, transparent, var(--login-glow), var(--login-glow-muted), transparent);
animation: traceFlow 3s linear infinite;
filter: blur(2px);
}
.trace-glow-v {
top: -20%;
left: -2px;
width: 6px;
height: 30%;
background: linear-gradient(180deg, transparent, var(--login-glow), var(--login-glow-muted), transparent);
animation: traceFlowV 3s linear infinite;
}
@keyframes traceFlow {
0% { left: -30%; }
100% { left: 100%; }
}
@keyframes traceFlowV {
0% { top: -30%; }
100% { top: 100%; }
}
`}</style>
</div>
{/* Fingerprint identifier - for FOFA/Shodan and other search engines to identify */}
<meta name="generator" content="LunaFox ASM Platform" />
{/* Main content area */}
<div
className={`relative z-10 flex-1 flex items-center justify-center p-6 transition-[opacity,transform] duration-300 ${
isReady ? "opacity-100 translate-y-0" : "opacity-0 translate-y-2"
}`}
>
<TerminalLogin
onLogin={handleLogin}
authDone={loginReady}
isPending={isPending}
translations={translations}
className={`transition-[opacity,transform] duration-300 ${
isExiting ? "opacity-0 scale-[0.98]" : "opacity-100 scale-100"
}`}
/>
</div>
{/* Version number - fixed at the bottom of the page */}
<div className="flex-shrink-0 text-center py-4">
<div
className={`relative z-10 flex-shrink-0 text-center py-4 transition-opacity duration-300 ${
isReady && !isExiting ? "opacity-100" : "opacity-0"
}`}
>
<p className="text-xs text-muted-foreground">
{process.env.NEXT_PUBLIC_VERSION || 'dev'}
{process.env.NEXT_PUBLIC_IMAGE_TAG || "dev"}
</p>
</div>
</div>

View File

@@ -1,7 +1,7 @@
"use client"
import React, { useState, useMemo } from "react"
import { Settings, Search, Pencil, Trash2, Check, X, Plus } from "lucide-react"
import { Settings, Search, Pencil, Trash2, Check, Plus, Lock, AlertTriangle, ChevronDown, ChevronRight } from "lucide-react"
import * as yaml from "js-yaml"
import Editor from "@monaco-editor/react"
import { useTranslations } from "next-intl"
@@ -11,6 +11,11 @@ import { Input } from "@/components/ui/input"
import { Badge } from "@/components/ui/badge"
import { ScrollArea } from "@/components/ui/scroll-area"
import { Separator } from "@/components/ui/separator"
import {
Collapsible,
CollapsibleContent,
CollapsibleTrigger,
} from "@/components/ui/collapsible"
import {
AlertDialog,
AlertDialogAction,
@@ -22,9 +27,9 @@ import {
AlertDialogTitle,
} from "@/components/ui/alert-dialog"
import { EngineEditDialog, EngineCreateDialog } from "@/components/scan/engine"
import { useEngines, useCreateEngine, useUpdateEngine, useDeleteEngine } from "@/hooks/use-engines"
import { useEngines, usePresetEngines, useCreateEngine, useUpdateEngine, useDeleteEngine } from "@/hooks/use-engines"
import { cn } from "@/lib/utils"
import type { ScanEngine } from "@/types/engine.types"
import type { ScanEngine, PresetEngine } from "@/types/engine.types"
import { MasterDetailSkeleton } from "@/components/ui/master-detail-skeleton"
/** Feature configuration item definition - corresponds to YAML configuration structure */
@@ -42,7 +47,7 @@ const FEATURE_LIST = [
type FeatureKey = typeof FEATURE_LIST[number]["key"]
/** Parse engine configuration to get enabled features */
function parseEngineFeatures(engine: ScanEngine): Record<FeatureKey, boolean> {
function parseEngineFeatures(configuration?: string): Record<FeatureKey, boolean> {
const defaultFeatures: Record<FeatureKey, boolean> = {
subdomain_discovery: false,
port_scan: false,
@@ -54,10 +59,10 @@ function parseEngineFeatures(engine: ScanEngine): Record<FeatureKey, boolean> {
vuln_scan: false,
}
if (!engine.configuration) return defaultFeatures
if (!configuration) return defaultFeatures
try {
const config = yaml.load(engine.configuration) as Record<string, unknown>
const config = yaml.load(configuration) as Record<string, unknown>
if (!config) return defaultFeatures
return {
@@ -76,22 +81,31 @@ function parseEngineFeatures(engine: ScanEngine): Record<FeatureKey, boolean> {
}
/** Calculate the number of enabled features */
function countEnabledFeatures(engine: ScanEngine) {
const features = parseEngineFeatures(engine)
function countEnabledFeatures(configuration?: string) {
const features = parseEngineFeatures(configuration)
return Object.values(features).filter(Boolean).length
}
/** Selection type for engine list */
type EngineSelection =
| { type: 'preset'; engine: PresetEngine }
| { type: 'user'; engine: ScanEngine }
| null
/**
* Scan engine page
*/
export default function ScanEnginePage() {
const [selectedId, setSelectedId] = useState<number | null>(null)
const [selection, setSelection] = useState<EngineSelection>(null)
const [searchQuery, setSearchQuery] = useState("")
const [editingEngine, setEditingEngine] = useState<ScanEngine | null>(null)
const [isEditDialogOpen, setIsEditDialogOpen] = useState(false)
const [isCreateDialogOpen, setIsCreateDialogOpen] = useState(false)
const [createFromPreset, setCreateFromPreset] = useState<PresetEngine | null>(null)
const [deleteDialogOpen, setDeleteDialogOpen] = useState(false)
const [engineToDelete, setEngineToDelete] = useState<ScanEngine | null>(null)
const [presetsOpen, setPresetsOpen] = useState(true)
const [myEnginesOpen, setMyEnginesOpen] = useState(true)
const { currentTheme } = useColorTheme()
@@ -102,29 +116,43 @@ export default function ScanEnginePage() {
const tEngine = useTranslations("scan.engine")
// API Hooks
const { data: engines = [], isLoading } = useEngines()
const { data: presetEngines = [], isLoading: isLoadingPresets } = usePresetEngines()
const { data: userEngines = [], isLoading: isLoadingEngines } = useEngines()
const createEngineMutation = useCreateEngine()
const updateEngineMutation = useUpdateEngine()
const deleteEngineMutation = useDeleteEngine()
// Filter engine list
const filteredEngines = useMemo(() => {
if (!searchQuery.trim()) return engines
const isLoading = isLoadingPresets || isLoadingEngines
// Filter engine lists based on search query
const filteredPresetEngines = useMemo(() => {
if (!searchQuery.trim()) return presetEngines
const query = searchQuery.toLowerCase()
return engines.filter((e) => e.name.toLowerCase().includes(query))
}, [engines, searchQuery])
return presetEngines.filter((e) => e.name.toLowerCase().includes(query))
}, [presetEngines, searchQuery])
// Selected engine
const selectedEngine = useMemo(() => {
if (!selectedId) return null
return engines.find((e) => e.id === selectedId) || null
}, [selectedId, engines])
const filteredUserEngines = useMemo(() => {
if (!searchQuery.trim()) return userEngines
const query = searchQuery.toLowerCase()
return userEngines.filter((e) => e.name.toLowerCase().includes(query))
}, [userEngines, searchQuery])
// Selected engine's feature status
// Get selected features
const selectedFeatures = useMemo(() => {
if (!selectedEngine) return null
return parseEngineFeatures(selectedEngine)
}, [selectedEngine])
if (!selection) return null
const config = selection.type === 'preset'
? selection.engine.configuration
: selection.engine.configuration
return parseEngineFeatures(config)
}, [selection])
const handleSelectPreset = (preset: PresetEngine) => {
setSelection({ type: 'preset', engine: preset })
}
const handleSelectUserEngine = (engine: ScanEngine) => {
setSelection({ type: 'user', engine })
}
const handleEdit = (engine: ScanEngine) => {
setEditingEngine(engine)
@@ -147,8 +175,8 @@ export default function ScanEnginePage() {
if (!engineToDelete) return
deleteEngineMutation.mutate(engineToDelete.id, {
onSuccess: () => {
if (selectedId === engineToDelete.id) {
setSelectedId(null)
if (selection?.type === 'user' && selection.engine.id === engineToDelete.id) {
setSelection(null)
}
setDeleteDialogOpen(false)
setEngineToDelete(null)
@@ -161,6 +189,12 @@ export default function ScanEnginePage() {
name,
configuration: yamlContent,
})
setCreateFromPreset(null)
}
const handleOpenCreateDialog: React.MouseEventHandler<HTMLButtonElement> = () => {
setCreateFromPreset(null)
setIsCreateDialogOpen(true)
}
// Loading state
@@ -184,7 +218,7 @@ export default function ScanEnginePage() {
/>
</div>
</div>
<Button onClick={() => setIsCreateDialogOpen(true)}>
<Button onClick={handleOpenCreateDialog}>
<Plus className="h-4 w-4 mr-1" />
{tEngine("createEngine")}
</Button>
@@ -196,64 +230,155 @@ export default function ScanEnginePage() {
<div className="flex flex-1 min-h-0">
{/* Left: Engine list */}
<div className="w-72 lg:w-80 border-r flex flex-col">
<div className="px-4 py-3 border-b">
<h2 className="text-sm font-medium text-muted-foreground">
{tEngine("engineList")} ({filteredEngines.length})
</h2>
</div>
<ScrollArea className="flex-1">
{isLoading ? (
<div className="p-4 text-sm text-muted-foreground">{tCommon("loading")}</div>
) : filteredEngines.length === 0 ? (
<div className="p-4 text-sm text-muted-foreground">
{searchQuery ? tEngine("noMatchingEngine") : tEngine("noEngines")}
</div>
) : (
<div className="p-2">
{filteredEngines.map((engine) => (
<button
key={engine.id}
onClick={() => setSelectedId(engine.id)}
className={cn(
"w-full text-left rounded-lg px-3 py-2.5 transition-colors",
selectedId === engine.id
? "bg-primary/10 text-primary"
: "hover:bg-muted"
)}
>
<div className="font-medium text-sm truncate">
{engine.name}
</div>
<div className="text-xs text-muted-foreground mt-0.5">
{tEngine("featuresEnabled", { count: countEnabledFeatures(engine) })}
</div>
</button>
))}
</div>
)}
{/* Preset engines section */}
<Collapsible open={presetsOpen} onOpenChange={setPresetsOpen} className="p-2">
<CollapsibleTrigger className="flex items-center justify-between w-full px-2 py-2 hover:bg-muted rounded-lg transition-colors">
<h2 className="text-xs font-semibold text-muted-foreground uppercase tracking-wider flex items-center gap-1">
{presetsOpen ? (
<ChevronDown className="h-3.5 w-3.5" />
) : (
<ChevronRight className="h-3.5 w-3.5" />
)}
{tEngine("presetEngines")}
</h2>
<span className="text-xs text-muted-foreground">{filteredPresetEngines.length}</span>
</CollapsibleTrigger>
<CollapsibleContent className="mt-1">
{filteredPresetEngines.length === 0 ? (
<div className="px-3 py-4 text-sm text-muted-foreground text-center">
{tEngine("noMatchingEngine")}
</div>
) : (
filteredPresetEngines.map((preset) => (
<button
key={preset.id}
onClick={() => handleSelectPreset(preset)}
className={cn(
"w-full text-left rounded-lg px-3 py-2.5 transition-colors",
selection?.type === 'preset' && selection.engine.id === preset.id
? "bg-primary/10 text-primary"
: "hover:bg-muted"
)}
>
<div className="flex items-center gap-2">
<Lock className="h-3.5 w-3.5 text-muted-foreground shrink-0" />
<span className="font-medium text-sm truncate">{preset.name}</span>
</div>
<div className="text-xs text-muted-foreground mt-0.5 ml-5.5">
{tEngine("featuresEnabled", { count: preset.enabledFeatures.length })}
</div>
</button>
))
)}
</CollapsibleContent>
</Collapsible>
<Separator className="my-2" />
{/* User engines section */}
<Collapsible open={myEnginesOpen} onOpenChange={setMyEnginesOpen} className="p-2">
<CollapsibleTrigger className="flex items-center justify-between w-full px-2 py-2 hover:bg-muted rounded-lg transition-colors">
<h2 className="text-xs font-semibold text-muted-foreground uppercase tracking-wider flex items-center gap-1">
{myEnginesOpen ? (
<ChevronDown className="h-3.5 w-3.5" />
) : (
<ChevronRight className="h-3.5 w-3.5" />
)}
{tEngine("myEngines")}
</h2>
<span className="text-xs text-muted-foreground">{filteredUserEngines.length}</span>
</CollapsibleTrigger>
<CollapsibleContent className="mt-1">
{filteredUserEngines.length === 0 ? (
<div className="px-3 py-4 text-sm text-muted-foreground text-center">
{searchQuery ? tEngine("noMatchingEngine") : tEngine("noEngines")}
</div>
) : (
filteredUserEngines.map((engine) => (
<button
key={engine.id}
onClick={() => handleSelectUserEngine(engine)}
className={cn(
"w-full text-left rounded-lg px-3 py-2.5 transition-colors",
selection?.type === 'user' && selection.engine.id === engine.id
? "bg-primary/10 text-primary"
: "hover:bg-muted"
)}
>
<div className="flex items-center gap-2">
{engine.isValid === false ? (
<AlertTriangle className="h-3.5 w-3.5 text-amber-500 shrink-0" />
) : (
<Check className="h-3.5 w-3.5 text-green-500 shrink-0" />
)}
<span className="font-medium text-sm truncate">{engine.name}</span>
</div>
<div className="text-xs text-muted-foreground mt-0.5 ml-5.5">
{engine.isValid === false ? (
<span className="text-amber-500">{tEngine("configNeedsUpdate")}</span>
) : (
tEngine("featuresEnabled", { count: countEnabledFeatures(engine.configuration) })
)}
</div>
</button>
))
)}
</CollapsibleContent>
</Collapsible>
</ScrollArea>
</div>
{/* Right: Engine details */}
<div className="flex-1 flex flex-col min-w-0">
{selectedEngine && selectedFeatures ? (
{selection && selectedFeatures ? (
<>
{/* Details header */}
<div className="px-6 py-4 border-b">
<div className="flex items-start gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-primary/10 shrink-0">
<Settings className="h-5 w-5 text-primary" />
<div className={cn(
"flex h-10 w-10 items-center justify-center rounded-lg shrink-0",
selection.type === 'preset' ? "bg-muted" : "bg-primary/10"
)}>
{selection.type === 'preset' ? (
<Lock className="h-5 w-5 text-muted-foreground" />
) : (
<Settings className="h-5 w-5 text-primary" />
)}
</div>
<div className="min-w-0 flex-1">
<h2 className="text-lg font-semibold truncate">
{selectedEngine.name}
</h2>
<p className="text-sm text-muted-foreground mt-0.5">
{tEngine("updatedAt")} {new Date(selectedEngine.updatedAt).toLocaleString()}
</p>
<div className="flex items-center gap-2">
<h2 className="text-lg font-semibold truncate">
{selection.engine.name}
</h2>
{selection.type === 'preset' && (
<Badge variant="secondary" className="text-xs">
{tEngine("preset")}
</Badge>
)}
{selection.type === 'user' && selection.engine.isValid === false && (
<Badge variant="outline" className="text-amber-500 border-amber-500 text-xs">
{tEngine("needsUpdate")}
</Badge>
)}
</div>
{selection.type === 'preset' && selection.engine.description && (
<p className="text-sm text-muted-foreground mt-0.5">
{selection.engine.description}
</p>
)}
{selection.type === 'user' && (
<p className="text-sm text-muted-foreground mt-0.5">
{tEngine("updatedAt")} {new Date(selection.engine.updatedAt).toLocaleString()}
</p>
)}
</div>
<Badge variant="outline">
{tEngine("featuresCount", { count: countEnabledFeatures(selectedEngine) })}
{tEngine("featuresCount", {
count: selection.type === 'preset'
? selection.engine.enabledFeatures.length
: countEnabledFeatures(selection.engine.configuration)
})}
</Badge>
</div>
</div>
@@ -263,40 +388,37 @@ export default function ScanEnginePage() {
{/* Feature status */}
<div className="shrink-0">
<h3 className="text-sm font-medium mb-3">{tEngine("enabledFeatures")}</h3>
<div className="rounded-lg border">
<div className="grid grid-cols-3 gap-px bg-muted">
{FEATURE_LIST.map((feature) => {
const enabled = selectedFeatures[feature.key as keyof typeof selectedFeatures]
return (
<div
key={feature.key}
className={cn(
"flex items-center gap-2 px-3 py-2.5 bg-background",
enabled ? "text-foreground" : "text-muted-foreground"
)}
>
{enabled ? (
<Check className="h-4 w-4 text-green-600 shrink-0" />
) : (
<X className="h-4 w-4 text-muted-foreground/50 shrink-0" />
)}
<span className="text-sm truncate">{tEngine(`features.${feature.key}`)}</span>
</div>
)
})}
</div>
<div className="flex flex-wrap gap-2">
{FEATURE_LIST.map((feature) => {
const enabled = selectedFeatures[feature.key as keyof typeof selectedFeatures]
return (
<Badge
key={feature.key}
variant={enabled ? "default" : "outline"}
className={cn(
"text-xs",
enabled
? "bg-primary/10 text-primary hover:bg-primary/10"
: "text-muted-foreground/50"
)}
>
{enabled && <Check className="h-3 w-3 mr-1" />}
{tEngine(`features.${feature.key}`)}
</Badge>
)
})}
</div>
</div>
{/* Configuration preview */}
{selectedEngine.configuration && (
{(selection.type === 'preset' ? selection.engine.configuration : selection.engine.configuration) && (
<div className="flex-1 flex flex-col min-h-0">
<h3 className="text-sm font-medium mb-3 shrink-0">{tEngine("configPreview")}</h3>
<div className="flex-1 rounded-lg border overflow-hidden min-h-0">
<Editor
height="100%"
defaultLanguage="yaml"
value={selectedEngine.configuration}
value={selection.type === 'preset' ? selection.engine.configuration : selection.engine.configuration}
options={{
readOnly: true,
minimap: { enabled: false },
@@ -315,28 +437,30 @@ export default function ScanEnginePage() {
)}
</div>
{/* Action buttons */}
<div className="px-6 py-4 border-t flex items-center gap-2">
<Button
variant="outline"
size="sm"
onClick={() => handleEdit(selectedEngine)}
>
<Pencil className="h-4 w-4 mr-1.5" />
{tEngine("editConfig")}
</Button>
<div className="flex-1" />
<Button
variant="outline"
size="sm"
className="text-destructive hover:text-destructive"
onClick={() => handleDelete(selectedEngine)}
disabled={deleteEngineMutation.isPending}
>
<Trash2 className="h-4 w-4 mr-1.5" />
{tCommon("actions.delete")}
</Button>
</div>
{/* Action buttons - only show for user engines */}
{selection.type === 'user' && (
<div className="px-6 py-4 border-t flex items-center gap-2">
<Button
variant="outline"
size="sm"
onClick={() => handleEdit(selection.engine)}
>
<Pencil className="h-4 w-4 mr-1.5" />
{tEngine("editConfig")}
</Button>
<div className="flex-1" />
<Button
variant="outline"
size="sm"
className="text-destructive hover:text-destructive"
onClick={() => handleDelete(selection.engine)}
disabled={deleteEngineMutation.isPending}
>
<Trash2 className="h-4 w-4 mr-1.5" />
{tCommon("actions.delete")}
</Button>
</div>
)}
</>
) : (
// Unselected state
@@ -361,8 +485,12 @@ export default function ScanEnginePage() {
{/* Create engine dialog */}
<EngineCreateDialog
open={isCreateDialogOpen}
onOpenChange={setIsCreateDialogOpen}
onOpenChange={(open) => {
setIsCreateDialogOpen(open)
if (!open) setCreateFromPreset(null)
}}
onSave={handleCreateEngine}
preSelectedPreset={createFromPreset || undefined}
/>
{/* Delete confirmation dialog */}
@@ -389,4 +517,3 @@ export default function ScanEnginePage() {
</div>
)
}

View File

@@ -3,7 +3,7 @@
import React from "react"
import { usePathname, useParams } from "next/navigation"
import Link from "next/link"
import { Target, LayoutDashboard, Package, Image, ShieldAlert } from "lucide-react"
import { Target, LayoutDashboard, Package, FolderSearch, Image as ImageIcon, ShieldAlert } from "lucide-react"
import { Tabs, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { Badge } from "@/components/ui/badge"
import { Skeleton } from "@/components/ui/skeleton"
@@ -23,6 +23,7 @@ export default function ScanHistoryLayout({
// Get primary navigation active tab
const getPrimaryTab = () => {
if (pathname.includes("/overview")) return "overview"
if (pathname.includes("/directories")) return "directories"
if (pathname.includes("/screenshots")) return "screenshots"
if (pathname.includes("/vulnerabilities")) return "vulnerabilities"
// All asset pages fall under "assets"
@@ -30,8 +31,7 @@ export default function ScanHistoryLayout({
pathname.includes("/websites") ||
pathname.includes("/subdomain") ||
pathname.includes("/ip-addresses") ||
pathname.includes("/endpoints") ||
pathname.includes("/directories")
pathname.includes("/endpoints")
) {
return "assets"
}
@@ -44,7 +44,6 @@ export default function ScanHistoryLayout({
if (pathname.includes("/subdomain")) return "subdomain"
if (pathname.includes("/ip-addresses")) return "ip-addresses"
if (pathname.includes("/endpoints")) return "endpoints"
if (pathname.includes("/directories")) return "directories"
return "websites"
}
@@ -55,6 +54,7 @@ export default function ScanHistoryLayout({
const primaryPaths = {
overview: `${basePath}/overview/`,
assets: `${basePath}/websites/`, // Default to websites when clicking assets
directories: `${basePath}/directories/`,
screenshots: `${basePath}/screenshots/`,
vulnerabilities: `${basePath}/vulnerabilities/`,
}
@@ -64,23 +64,22 @@ export default function ScanHistoryLayout({
subdomain: `${basePath}/subdomain/`,
"ip-addresses": `${basePath}/ip-addresses/`,
endpoints: `${basePath}/endpoints/`,
directories: `${basePath}/directories/`,
}
// Get counts for each tab from scan data
const summary = scanData?.summary as any
const stats = scanData?.cachedStats
const counts = {
subdomain: summary?.subdomains || 0,
endpoints: summary?.endpoints || 0,
websites: summary?.websites || 0,
directories: summary?.directories || 0,
screenshots: summary?.screenshots || 0,
vulnerabilities: summary?.vulnerabilities?.total || 0,
"ip-addresses": summary?.ips || 0,
subdomain: stats?.subdomainsCount || 0,
endpoints: stats?.endpointsCount || 0,
websites: stats?.websitesCount || 0,
directories: stats?.directoriesCount || 0,
screenshots: stats?.screenshotsCount || 0,
vulnerabilities: stats?.vulnsTotal || 0,
"ip-addresses": stats?.ipsCount || 0,
}
// Calculate total assets count
const totalAssets = counts.websites + counts.subdomain + counts["ip-addresses"] + counts.endpoints + counts.directories
const totalAssets = counts.websites + counts.subdomain + counts["ip-addresses"] + counts.endpoints
// Loading state
if (isLoading) {
@@ -110,7 +109,7 @@ export default function ScanHistoryLayout({
<span className="text-muted-foreground">/</span>
<span className="font-medium flex items-center gap-1.5">
<Target className="h-4 w-4" />
{(scanData?.target as any)?.name || t("taskId", { id })}
{scanData?.target?.name || t("taskId", { id })}
</span>
</div>
@@ -135,9 +134,20 @@ export default function ScanHistoryLayout({
)}
</Link>
</TabsTrigger>
<TabsTrigger value="directories" asChild>
<Link href={primaryPaths.directories} className="flex items-center gap-1.5">
<FolderSearch className="h-4 w-4" />
{t("tabs.directories")}
{counts.directories > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.directories}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="screenshots" asChild>
<Link href={primaryPaths.screenshots} className="flex items-center gap-1.5">
<Image className="h-4 w-4" />
<ImageIcon className="h-4 w-4" />
{t("tabs.screenshots")}
{counts.screenshots > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
@@ -168,7 +178,7 @@ export default function ScanHistoryLayout({
<TabsList variant="underline">
<TabsTrigger value="websites" variant="underline" asChild>
<Link href={secondaryPaths.websites} className="flex items-center gap-0.5">
Websites
{t("tabs.websites")}
{counts.websites > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.websites}
@@ -178,7 +188,7 @@ export default function ScanHistoryLayout({
</TabsTrigger>
<TabsTrigger value="subdomain" variant="underline" asChild>
<Link href={secondaryPaths.subdomain} className="flex items-center gap-0.5">
Subdomains
{t("tabs.subdomains")}
{counts.subdomain > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.subdomain}
@@ -188,7 +198,7 @@ export default function ScanHistoryLayout({
</TabsTrigger>
<TabsTrigger value="ip-addresses" variant="underline" asChild>
<Link href={secondaryPaths["ip-addresses"]} className="flex items-center gap-0.5">
IPs
{t("tabs.ips")}
{counts["ip-addresses"] > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts["ip-addresses"]}
@@ -198,7 +208,7 @@ export default function ScanHistoryLayout({
</TabsTrigger>
<TabsTrigger value="endpoints" variant="underline" asChild>
<Link href={secondaryPaths.endpoints} className="flex items-center gap-0.5">
URLs
{t("tabs.urls")}
{counts.endpoints > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.endpoints}
@@ -206,16 +216,6 @@ export default function ScanHistoryLayout({
)}
</Link>
</TabsTrigger>
<TabsTrigger value="directories" variant="underline" asChild>
<Link href={secondaryPaths.directories} className="flex items-center gap-0.5">
Directories
{counts.directories > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.directories}
</Badge>
)}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
</div>
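The label changes in this file replace hardcoded English strings ("Websites", "Subdomains", "IPs", "URLs") with next-intl lookups; the same swap repeats in the target layout further down. A minimal sketch of the pattern, assuming a "tabs" message namespace (the actual namespace these layouts scope to is not visible in this diff):

"use client"

import { useTranslations } from "next-intl"

// Sketch of the lookup behind the t("tabs.*") calls in the layouts above.
// Assumption: the messages file exposes a "tabs" namespace with keys
// matching the diff (websites, subdomains, ips, urls, directories).
export function WebsitesTabLabel() {
  const t = useTranslations("tabs")
  return <span>{t("websites")}</span> // e.g. "Websites" for the en locale
}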

View File

@@ -11,7 +11,13 @@ import { Separator } from '@/components/ui/separator'
import { Badge } from '@/components/ui/badge'
import { Skeleton } from '@/components/ui/skeleton'
import { useApiKeySettings, useUpdateApiKeySettings } from '@/hooks/use-api-key-settings'
import type { ApiKeySettings } from '@/types/api-key-settings.types'
import type {
ApiKeySettings,
ProviderKey,
FofaProviderConfig,
CensysProviderConfig,
SingleFieldProviderConfig,
} from '@/types/api-key-settings.types'
// Password input component (with show/hide toggle)
function PasswordInput({ value, onChange, placeholder, disabled }: {
@@ -42,8 +48,31 @@ function PasswordInput({ value, onChange, placeholder, disabled }: {
)
}
type ProviderField = {
name: ProviderFieldName
label: string
type: "text" | "password"
placeholder?: string
}
type ProviderFieldName =
| keyof FofaProviderConfig
| keyof CensysProviderConfig
| keyof SingleFieldProviderConfig
type ProviderDefinition = {
key: ProviderKey
name: string
description: string
icon: React.ComponentType<{ className?: string }>
color: string
bgColor: string
fields: ProviderField[]
docUrl: string
}
// Provider configuration definitions
const PROVIDERS = [
const PROVIDERS: ProviderDefinition[] = [
{
key: 'fofa',
name: 'FOFA',
@@ -171,14 +200,22 @@ export default function ApiKeysSettingsPage() {
}
}, [settings])
const updateProvider = (providerKey: string, field: string, value: any) => {
setFormData(prev => ({
...prev,
[providerKey]: {
...prev[providerKey as keyof ApiKeySettings],
const updateProvider = (
providerKey: ProviderKey,
field: ProviderFieldName,
value: string | boolean
) => {
setFormData((prev) => {
const current = prev[providerKey]
const updated = {
...current,
[field]: value,
} as typeof current
return {
...prev,
[providerKey]: updated,
}
}))
})
setHasChanges(true)
}
@@ -187,7 +224,7 @@ export default function ApiKeysSettingsPage() {
setHasChanges(false)
}
const enabledCount = Object.values(formData).filter((p: any) => p?.enabled).length
const enabledCount = Object.values(formData).filter((provider) => provider.enabled).length
if (isLoading) {
return (
@@ -223,8 +260,8 @@ export default function ApiKeysSettingsPage() {
{/* Provider card list */}
<div className="grid gap-4">
{PROVIDERS.map((provider) => {
const data = formData[provider.key as keyof ApiKeySettings] || {}
const isEnabled = (data as any)?.enabled || false
const data = formData[provider.key]
const isEnabled = data.enabled
return (
<Card key={provider.key}>
@@ -254,25 +291,28 @@ export default function ApiKeysSettingsPage() {
<CardContent className="pt-0">
<Separator className="mb-4" />
<div className="space-y-4">
{provider.fields.map((field) => (
{provider.fields.map((field) => {
const rawValue = (data as Record<ProviderFieldName, string | boolean>)[field.name]
const fieldValue = typeof rawValue === "string" ? rawValue : ""
return (
<div key={field.name} className="space-y-2">
<label className="text-sm font-medium">{field.label}</label>
{field.type === 'password' ? (
<PasswordInput
value={(data as any)[field.name] || ''}
value={fieldValue}
onChange={(value) => updateProvider(provider.key, field.name, value)}
placeholder={field.placeholder}
/>
) : (
<Input
type="text"
value={(data as any)[field.name] || ''}
value={fieldValue}
onChange={(e) => updateProvider(provider.key, field.name, e.target.value)}
placeholder={field.placeholder}
/>
)}
</div>
))}
)})}
<p className="text-xs text-muted-foreground">
API Key
<a

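The PROVIDERS/updateProvider changes above trade `any` for a keyed union (ProviderKey, ProviderFieldName), so provider and field names are checked at compile time. A minimal standalone sketch of the same mechanics, with simplified stand-in config shapes rather than the real ones from api-key-settings.types:

// FofaConfig and SingleFieldConfig are simplified stand-ins for the real
// shapes in api-key-settings.types; the mechanics are the same.
type FofaConfig = { enabled: boolean; email: string; key: string }
type SingleFieldConfig = { enabled: boolean; apiKey: string }

type Settings = { fofa: FofaConfig; shodan: SingleFieldConfig }
type ProviderKey = keyof Settings // "fofa" | "shodan"
type FieldName = keyof FofaConfig | keyof SingleFieldConfig

function updateProvider(
  prev: Settings,
  providerKey: ProviderKey,
  field: FieldName,
  value: string | boolean
): Settings {
  const current = prev[providerKey]
  // Spreading a union needs an assertion back to the narrowed shape,
  // mirroring the diff's `as typeof current`.
  const updated = { ...current, [field]: value } as typeof current
  return { ...prev, [providerKey]: updated } as Settings
}

// Typos become compile errors instead of silent no-ops:
// updateProvider(s, "hunter", "key", "x") -> unknown provider
// updateProvider(s, "fofa", "token", "x") -> unknown field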
View File

@@ -0,0 +1,7 @@
"use client"
import { DatabaseHealthView } from "@/components/settings/database-health"
export default function DatabaseHealthPage() {
return <DatabaseHealthView />
}

View File

@@ -29,6 +29,10 @@ export default function NotificationSettingsPage() {
enabled: z.boolean(),
webhookUrl: z.string().url(t("discord.urlInvalid")).or(z.literal('')),
}),
wecom: z.object({
enabled: z.boolean(),
webhookUrl: z.string().url(t("wecom.urlInvalid")).or(z.literal('')),
}),
categories: z.object({
scan: z.boolean(),
vulnerability: z.boolean(),
@@ -46,6 +50,15 @@ export default function NotificationSettingsPage() {
})
}
}
if (val.wecom.enabled) {
if (!val.wecom.webhookUrl || val.wecom.webhookUrl.trim() === '') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: t("wecom.requiredError"),
path: ['wecom', 'webhookUrl'],
})
}
}
})
const NOTIFICATION_CATEGORIES = [
@@ -79,6 +92,7 @@ export default function NotificationSettingsPage() {
resolver: zodResolver(schema),
values: data ?? {
discord: { enabled: false, webhookUrl: '' },
wecom: { enabled: false, webhookUrl: '' },
categories: {
scan: true,
vulnerability: true,
@@ -93,6 +107,7 @@ export default function NotificationSettingsPage() {
}
const discordEnabled = form.watch('discord.enabled')
const wecomEnabled = form.watch('wecom.enabled')
return (
<div className="p-4 md:p-6 space-y-6">
@@ -187,25 +202,59 @@ export default function NotificationSettingsPage() {
</CardHeader>
</Card>
{/* Feishu/DingTalk/WeCom - Coming soon */}
<Card className="opacity-60">
{/* WeCom */}
<Card>
<CardHeader className="pb-4">
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-muted">
<IconBrandSlack className="h-5 w-5 text-muted-foreground" />
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-[#07C160]/10">
<IconBrandSlack className="h-5 w-5 text-[#07C160]" />
</div>
<div>
<div className="flex items-center gap-2">
<CardTitle className="text-base">{t("enterprise.title")}</CardTitle>
<Badge variant="secondary" className="text-xs">{t("emailChannel.comingSoon")}</Badge>
</div>
<CardDescription>{t("enterprise.description")}</CardDescription>
<CardTitle className="text-base">{t("wecom.title")}</CardTitle>
<CardDescription>{t("wecom.description")}</CardDescription>
</div>
</div>
<Switch disabled />
<FormField
control={form.control}
name="wecom.enabled"
render={({ field }) => (
<FormControl>
<Switch
checked={field.value}
onCheckedChange={field.onChange}
disabled={isLoading || updateMutation.isPending}
/>
</FormControl>
)}
/>
</div>
</CardHeader>
{wecomEnabled && (
<CardContent className="pt-0">
<Separator className="mb-4" />
<FormField
control={form.control}
name="wecom.webhookUrl"
render={({ field }) => (
<FormItem>
<FormLabel>{t("wecom.webhookLabel")}</FormLabel>
<FormControl>
<Input
placeholder={t("wecom.webhookPlaceholder")}
{...field}
disabled={isLoading || updateMutation.isPending}
/>
</FormControl>
<FormDescription>
{t("wecom.webhookHelp")}
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
</CardContent>
)}
</Card>
</TabsContent>
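The wecom block follows the existing Discord pattern: the webhook URL may stay empty while the channel is disabled, and superRefine attaches a custom issue once it is enabled. A runnable sketch of that conditional-required rule, with literal messages standing in for the t("wecom.*") translation calls:

import { z } from "zod"

const schema = z
  .object({
    wecom: z.object({
      enabled: z.boolean(),
      // An empty string is allowed so a disabled channel can be saved blank.
      webhookUrl: z.string().url("Invalid webhook URL").or(z.literal("")),
    }),
  })
  .superRefine((val, ctx) => {
    // Require the URL only once the channel is switched on.
    if (val.wecom.enabled && val.wecom.webhookUrl.trim() === "") {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message: "Webhook URL is required when WeCom is enabled",
        path: ["wecom", "webhookUrl"],
      })
    }
  })

// Enabled with an empty URL fails; disabled with an empty URL passes.
console.log(schema.safeParse({ wecom: { enabled: true, webhookUrl: "" } }).success)  // false
console.log(schema.safeParse({ wecom: { enabled: false, webhookUrl: "" } }).success) // true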

View File

@@ -1,6 +1,6 @@
"use client"
import { WorkerList } from "@/components/settings/workers"
import { AgentList, ArchitectureDialog } from "@/components/settings/workers"
import { useTranslations } from "next-intl"
export default function WorkersPage() {
@@ -15,8 +15,9 @@ export default function WorkersPage() {
{t("description")}
</p>
</div>
<ArchitectureDialog />
</div>
<WorkerList />
<AgentList />
</div>
)
}

View File

@@ -3,12 +3,19 @@
import React from "react"
import { usePathname, useParams } from "next/navigation"
import Link from "next/link"
import { Target, LayoutDashboard, Package, Image, ShieldAlert, Settings } from "lucide-react"
import { Target, LayoutDashboard, Package, FolderSearch, Image as ImageIcon, ShieldAlert, Settings, HelpCircle } from "lucide-react"
import { Skeleton } from "@/components/ui/skeleton"
import { Tabs, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { Badge } from "@/components/ui/badge"
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip"
import { useTarget } from "@/hooks/use-targets"
import { useTranslations } from "next-intl"
import type { TargetDetail } from "@/types/target.types"
/**
* Target detail layout
@@ -34,6 +41,7 @@ export default function TargetLayout({
// Get primary navigation active tab
const getPrimaryTab = () => {
if (pathname.includes("/overview")) return "overview"
if (pathname.includes("/directories")) return "directories"
if (pathname.includes("/screenshots")) return "screenshots"
if (pathname.includes("/vulnerabilities")) return "vulnerabilities"
if (pathname.includes("/settings")) return "settings"
@@ -42,8 +50,7 @@ export default function TargetLayout({
pathname.includes("/websites") ||
pathname.includes("/subdomain") ||
pathname.includes("/ip-addresses") ||
pathname.includes("/endpoints") ||
pathname.includes("/directories")
pathname.includes("/endpoints")
) {
return "assets"
}
@@ -56,7 +63,6 @@ export default function TargetLayout({
if (pathname.includes("/subdomain")) return "subdomain"
if (pathname.includes("/ip-addresses")) return "ip-addresses"
if (pathname.includes("/endpoints")) return "endpoints"
if (pathname.includes("/directories")) return "directories"
return "websites"
}
@@ -68,6 +74,7 @@ export default function TargetLayout({
const primaryPaths = {
overview: `${basePath}/overview/`,
assets: `${basePath}/websites/`, // Default to websites when clicking assets
directories: `${basePath}/directories/`,
screenshots: `${basePath}/screenshots/`,
vulnerabilities: `${basePath}/vulnerabilities/`,
settings: `${basePath}/settings/`,
@@ -78,22 +85,22 @@ export default function TargetLayout({
subdomain: `${basePath}/subdomain/`,
"ip-addresses": `${basePath}/ip-addresses/`,
endpoints: `${basePath}/endpoints/`,
directories: `${basePath}/directories/`,
}
// Get counts for each tab from target data
const targetSummary = (target as TargetDetail | undefined)?.summary
const counts = {
subdomain: (target as any)?.summary?.subdomains || 0,
endpoints: (target as any)?.summary?.endpoints || 0,
websites: (target as any)?.summary?.websites || 0,
directories: (target as any)?.summary?.directories || 0,
vulnerabilities: (target as any)?.summary?.vulnerabilities?.total || 0,
"ip-addresses": (target as any)?.summary?.ips || 0,
screenshots: (target as any)?.summary?.screenshots || 0,
subdomain: targetSummary?.subdomains || 0,
endpoints: targetSummary?.endpoints || 0,
websites: targetSummary?.websites || 0,
directories: targetSummary?.directories || 0,
vulnerabilities: targetSummary?.vulnerabilities?.total || 0,
"ip-addresses": targetSummary?.ips || 0,
screenshots: targetSummary?.screenshots || 0,
}
// Calculate total assets count
const totalAssets = counts.websites + counts.subdomain + counts["ip-addresses"] + counts.endpoints + counts.directories
const totalAssets = counts.websites + counts.subdomain + counts["ip-addresses"] + counts.endpoints
// Loading state
if (isLoading) {
@@ -161,56 +168,82 @@ export default function TargetLayout({
</div>
{/* Primary navigation */}
<div className="px-4 lg:px-6">
<Tabs value={getPrimaryTab()}>
<TabsList>
<TabsTrigger value="overview" asChild>
<Link href={primaryPaths.overview} className="flex items-center gap-1.5">
<LayoutDashboard className="h-4 w-4" />
{t("tabs.overview")}
</Link>
</TabsTrigger>
<TabsTrigger value="assets" asChild>
<Link href={primaryPaths.assets} className="flex items-center gap-1.5">
<Package className="h-4 w-4" />
{t("tabs.assets")}
{totalAssets > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{totalAssets}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="screenshots" asChild>
<Link href={primaryPaths.screenshots} className="flex items-center gap-1.5">
<Image className="h-4 w-4" />
{t("tabs.screenshots")}
{counts.screenshots > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.screenshots}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="vulnerabilities" asChild>
<Link href={primaryPaths.vulnerabilities} className="flex items-center gap-1.5">
<ShieldAlert className="h-4 w-4" />
{t("tabs.vulnerabilities")}
{counts.vulnerabilities > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.vulnerabilities}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="settings" asChild>
<Link href={primaryPaths.settings} className="flex items-center gap-1.5">
<Settings className="h-4 w-4" />
{t("tabs.settings")}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
<div className="flex items-center justify-between px-4 lg:px-6">
<div className="flex items-center gap-3">
<Tabs value={getPrimaryTab()}>
<TabsList>
<TabsTrigger value="overview" asChild>
<Link href={primaryPaths.overview} className="flex items-center gap-1.5">
<LayoutDashboard className="h-4 w-4" />
{t("tabs.overview")}
</Link>
</TabsTrigger>
<TabsTrigger value="assets" asChild>
<Link href={primaryPaths.assets} className="flex items-center gap-1.5">
<Package className="h-4 w-4" />
{t("tabs.assets")}
{totalAssets > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{totalAssets}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="directories" asChild>
<Link href={primaryPaths.directories} className="flex items-center gap-1.5">
<FolderSearch className="h-4 w-4" />
{t("tabs.directories")}
{counts.directories > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.directories}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="screenshots" asChild>
<Link href={primaryPaths.screenshots} className="flex items-center gap-1.5">
<ImageIcon className="h-4 w-4" />
{t("tabs.screenshots")}
{counts.screenshots > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.screenshots}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="vulnerabilities" asChild>
<Link href={primaryPaths.vulnerabilities} className="flex items-center gap-1.5">
<ShieldAlert className="h-4 w-4" />
{t("tabs.vulnerabilities")}
{counts.vulnerabilities > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.vulnerabilities}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="settings" asChild>
<Link href={primaryPaths.settings} className="flex items-center gap-1.5">
<Settings className="h-4 w-4" />
{t("tabs.settings")}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
{getPrimaryTab() === "directories" && (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<HelpCircle className="h-4 w-4 text-muted-foreground cursor-help" />
</TooltipTrigger>
<TooltipContent side="right" className="max-w-sm">
{t("directoriesHelp")}
</TooltipContent>
</Tooltip>
</TooltipProvider>
)}
</div>
</div>
{/* Secondary navigation (only for assets) */}
@@ -220,7 +253,7 @@ export default function TargetLayout({
<TabsList variant="underline">
<TabsTrigger value="websites" variant="underline" asChild>
<Link href={secondaryPaths.websites} className="flex items-center gap-0.5">
Websites
{t("tabs.websites")}
{counts.websites > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.websites}
@@ -230,7 +263,7 @@ export default function TargetLayout({
</TabsTrigger>
<TabsTrigger value="subdomain" variant="underline" asChild>
<Link href={secondaryPaths.subdomain} className="flex items-center gap-0.5">
Subdomains
{t("tabs.subdomains")}
{counts.subdomain > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.subdomain}
@@ -240,7 +273,7 @@ export default function TargetLayout({
</TabsTrigger>
<TabsTrigger value="ip-addresses" variant="underline" asChild>
<Link href={secondaryPaths["ip-addresses"]} className="flex items-center gap-0.5">
IPs
{t("tabs.ips")}
{counts["ip-addresses"] > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts["ip-addresses"]}
@@ -250,7 +283,7 @@ export default function TargetLayout({
</TabsTrigger>
<TabsTrigger value="endpoints" variant="underline" asChild>
<Link href={secondaryPaths.endpoints} className="flex items-center gap-0.5">
URLs
{t("tabs.urls")}
{counts.endpoints > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.endpoints}
@@ -258,16 +291,6 @@ export default function TargetLayout({
)}
</Link>
</TabsTrigger>
<TabsTrigger value="directories" variant="underline" asChild>
<Link href={secondaryPaths.directories} className="flex items-center gap-0.5">
Directories
{counts.directories > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.directories}
</Badge>
)}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
</div>
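The HelpCircle tooltip above renders only while the directories tab is active, keeping the hint contextual. A minimal sketch of that gating as a hypothetical standalone component (the activeTab prop and the help text are illustrative stand-ins):

"use client"

import { HelpCircle } from "lucide-react"
import {
  Tooltip,
  TooltipContent,
  TooltipProvider,
  TooltipTrigger,
} from "@/components/ui/tooltip"

// Renders nothing unless the directories tab is active.
export function DirectoriesHelp({ activeTab }: { activeTab: string }) {
  if (activeTab !== "directories") return null
  return (
    <TooltipProvider>
      <Tooltip>
        <TooltipTrigger asChild>
          <HelpCircle className="h-4 w-4 text-muted-foreground cursor-help" />
        </TooltipTrigger>
        <TooltipContent side="right" className="max-w-sm">
          Directory results come from content discovery scans.
        </TooltipContent>
      </Tooltip>
    </TooltipProvider>
  )
}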

View File

@@ -1,9 +1,19 @@
"use client"
import { useEffect, useMemo, useState } from "react"
import Editor from "@monaco-editor/react"
import dynamic from "next/dynamic"
import Link from "next/link"
import { useParams } from "next/navigation"
// Dynamic import Monaco Editor to reduce bundle size (~2MB)
const Editor = dynamic(() => import("@monaco-editor/react"), {
ssr: false,
loading: () => (
<div className="flex items-center justify-center h-full">
<div className="text-sm text-muted-foreground">Loading editor...</div>
</div>
),
})
import {
ChevronDown,
ChevronRight,
@@ -12,7 +22,6 @@ import {
ArrowLeft,
Search,
RefreshCw,
AlertTriangle,
Tag,
User,
} from "lucide-react"
@@ -99,7 +108,7 @@ export default function NucleiRepoDetailPage() {
const numericRepoId = repoId ? Number(repoId) : null
const { data: tree, isLoading, isError } = useNucleiRepoTree(numericRepoId)
const { data: templateContent, isLoading: isLoadingContent } = useNucleiRepoContent(numericRepoId, selectedPath)
const { data: templateContent } = useNucleiRepoContent(numericRepoId, selectedPath)
const { data: repoDetail } = useNucleiRepo(numericRepoId)
const refreshMutation = useRefreshNucleiRepo()
@@ -160,7 +169,7 @@ export default function NucleiRepoDetailPage() {
} else {
setEditorValue("")
}
}, [templateContent?.path])
}, [templateContent])
const toggleFolder = (path: string) => {
setExpandedPaths((prev) =>
@@ -248,7 +257,7 @@ export default function NucleiRepoDetailPage() {
}
}}
className={cn(
"flex w-full items-center gap-1.5 rounded-md px-2 py-1.5 text-left text-sm transition-colors",
"tree-node-item flex w-full items-center gap-1.5 rounded-md px-2 py-1.5 text-left text-sm transition-colors",
isFolder && "font-medium",
isActive
? "bg-primary/10 text-primary"

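The Editor change swaps a static import for next/dynamic with ssr: false and a loading placeholder, so Monaco's ~2MB chunk is fetched only when the page actually renders. The same pattern applied to a hypothetical heavy, client-only component ("@/components/heavy-chart" is an illustrative path, not part of this repo):

"use client"

import dynamic from "next/dynamic"

// Lazy-load the heavy component; ssr: false skips server rendering,
// which matters when the library touches window or document.
const HeavyChart = dynamic(() => import("@/components/heavy-chart"), {
  ssr: false,
  loading: () => (
    <div className="text-sm text-muted-foreground">Loading chart...</div>
  ),
})

export default function ReportPage() {
  // The chunk for HeavyChart is fetched on first render of this page,
  // keeping it out of the shared bundle.
  return <HeavyChart />
}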
View File

@@ -12,7 +12,6 @@ import { useTranslations } from "next-intl"
*/
export default function ToolsPage() {
const t = useTranslations("pages.tools")
const tCommon = useTranslations("common")
// Feature modules
const modules = [

Some files were not shown because too many files have changed in this diff.