Compare commits


328 Commits

Author SHA1 Message Date
yyhuni
b859fc9062 refactor(modules): update module paths to the new GitHub user namespace
- Change all import paths from github.com/orbit/server to github.com/yyhuni/orbit/server
- Update the go.mod module name to github.com/yyhuni/orbit/server
- Adjust internal reference paths to keep package imports consistent
- Update .gitignore to ignore AGENTS.md and WARP.md
- Tighten the binding rule on the engineNames field of the Scan request: it is now required and must contain exactly one element
2026-01-23 18:31:54 +08:00
yyhuni
49b5fbef28 chore(docs): remove redundant project documentation files
- Remove AGENTS.md to simplify the documentation structure
- Remove WARP.md, a duplicate operations guide
- Clean up the documentation directory to reduce maintenance burden
- Tidy the project root for overall cleanliness
2026-01-23 09:39:46 +08:00
yyhuni
11112a68f6 Remove .hypothesis, .DS_Store and log files from version control 2026-01-23 09:31:47 +08:00
yyhuni
9049b096ba Remove .venv and .kiro directories from version control 2026-01-23 09:29:51 +08:00
yyhuni
ca6c0eb082 Remove .kiro directory from version control 2026-01-23 09:28:09 +08:00
yyhuni
64bcd9a6f5 Ignore files 2026-01-23 09:20:32 +08:00
yyhuni
443e2172e4 Ignore AI files 2026-01-23 09:17:26 +08:00
yyhuni
c6dcfb0a5b Remove specs directory from version control 2026-01-23 09:14:46 +08:00
yyhuni
25ae325c69 Remove AI assistant directories from version control 2026-01-23 09:12:21 +08:00
yyhuni
cab83d89cf chore(.agent,.gemini,.github): remove duplicate vercel-react-best-practices skills
- Remove vercel-react-best-practices skill directory from .agent/skills
- Remove vercel-react-best-practices skill directory from .gemini/skills
- Remove vercel-react-best-practices skill directory from .github/skills
- Eliminate redundant skill definitions across multiple agent configurations
- Consolidate skill management to reduce maintenance overhead
2026-01-22 22:46:59 +08:00
yyhuni
0f8fff2dc4 chore(.claude): reorganize Claude commands and skills structure
- Add speckit command suite (.claude/commands/) for workflow automation
- Reorganize Vercel React best practices skills with improved structure
- Add Hypothesis testing constants database
- Remove .dockerignore and .gitignore from repository
- Add .DS_Store to tracked files
- Consolidate development tooling and AI assistant configuration for improved project workflow
2026-01-22 22:46:31 +08:00
yyhuni
6e48b97dc2 chore(.specify): add project constitution and development workflow scripts
- Add constitution.md template for documenting core principles and governance
- Add check-prerequisites.sh script for unified prerequisite validation
- Add common.sh utility functions for bash scripts
- Add create-new-feature.sh script for feature scaffolding
- Add setup-plan.sh script for implementation planning
- Add update-agent-context.sh script for agent context management
- Add agent-file-template.md for standardized agent documentation
- Add checklist-template.md for task tracking
- Add plan-template.md for implementation planning
- Add spec-template.md for feature specifications
- Add tasks-template.md for task breakdown
- Update scan history components with improved data handling and UI consistency
- Update scan types and mock data for enhanced scan tracking
- Update i18n messages for scan history localization
- Establish standardized development workflow and documentation structure
2026-01-22 08:56:22 +08:00
yyhuni
ed757d6e14 feat(engineschema): add JSON schema validation and migrate subdomain discovery schema
- Add new engineschema package with schema validation utilities for engine configs
- Implement Validate() function to validate config maps against JSON schemas
- Implement ValidateYAML() function to validate YAML blobs with nested engine support
- Add schema caching with mutex synchronization for performance
- Migrate subdomain_discovery.schema.json from server/configs/engines to server/internal/engineschema
- Enhance schema with $id, x-engine, and x-engine-version metadata fields
- Add conditional validation (if/then) for bruteforce tool enabled state
- Add additionalProperties: false constraints to enforce strict schema validation
- Add jsonschema/v5 dependency to server and worker modules
- Update schema-gen tool to generate schemas in new location
- Regenerate subdomain discovery schema with enhanced validation rules
- Update documentation generation timestamp
2026-01-21 22:00:23 +08:00
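The "schema caching with mutex synchronization" mentioned above can be sketched as a double-checked read/write-lock cache. The `Schema` type and function names below are hypothetical stand-ins for the engineschema package, not its actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// Schema is a stand-in for a compiled JSON schema; the real engineschema
// package presumably wraps jsonschema/v5 compiled schemas.
type Schema struct{ name string }

var (
	cacheMu sync.RWMutex
	cache   = map[string]*Schema{}
)

// compileSchema simulates the expensive compile step.
func compileSchema(name string) *Schema {
	return &Schema{name: name}
}

// getSchema returns a cached compiled schema, compiling at most once per name.
func getSchema(name string) *Schema {
	cacheMu.RLock()
	s, ok := cache[name]
	cacheMu.RUnlock()
	if ok {
		return s
	}
	cacheMu.Lock()
	defer cacheMu.Unlock()
	if s, ok := cache[name]; ok { // re-check under the write lock
		return s
	}
	s = compileSchema(name)
	cache[name] = s
	return s
}

func main() {
	a := getSchema("subdomain_discovery")
	b := getSchema("subdomain_discovery")
	fmt.Println(a == b) // same cached instance
}
```

Concurrent callers take the cheap read lock on the hot path; only a cache miss pays for the write lock and the compile.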
yyhuni
2aa1afbabf chore(docker): add server Dockerfile and update subdomain discovery paths
- Add new server/Dockerfile for Go backend containerization with multi-stage build
- Update docker-compose.dev.yml to include server service with database and Redis dependencies
- Migrate Sublist3r tool path from /usr/local/share to /opt/orbit-tools/share for consistency
- Add legacy notice to docker/worker/Dockerfile clarifying it is for the old Python executor
- Update subdomain discovery documentation with RFC3339 timestamp format
- Update template parsing test to reflect new tool path location
- Consolidate development environment configuration with all required services
2026-01-21 10:38:57 +08:00
yyhuni
35ac64db57 Merge branch 'feature/directory-sorting-demo' into feature/go-backend
Integrate frontend refactoring changes including:
- Dashboard animation optimizations
- Login flow enhancements
- Orbit rebranding updates
2026-01-20 21:24:52 +08:00
yyhuni
b4bfab92e3 fix(doc-gen): update timestamp format to RFC3339 standard
- Change timestamp format from "2006-01-02" to time.RFC3339 constant
- Ensures generated documentation includes a full ISO 8601 timestamp with timezone
- Improves consistency with standard time formatting practices
2026-01-20 21:18:53 +08:00
yyhuni
72210c42d0 style(worker): format subdomain discovery constants and reorder tool definitions
- Align constant assignments with consistent spacing for improved readability
- Reorder tool name constants alphabetically for better maintainability
- Move toolSubfinder constant to end of tool definitions list
- Standardize formatting across stage and tool constant declarations
2026-01-20 21:15:08 +08:00
yyhuni
91aaf7997f feat(worker): implement workflow code generation and enhance subdomain discovery
- Add code generation tools (const-gen, doc-gen, schema-gen) to automate workflow metadata and documentation
- Implement config key mapper for dynamic template parameter mapping and validation
- Add comprehensive test coverage for command builder, template loader, and runner components
- Enhance subdomain discovery workflow with recon stage replacing passive stage for better reconnaissance
- Add subdomain result parsing and writing utilities for output handling
- Implement batch sender tests and improve server client reliability
- Add CI workflow to validate generated files are up to date before builds
- Convert YAML engine config to JSON schema for better validation and IDE support
- Add extensive test data fixtures for template validation edge cases
- Update Makefile and development scripts for improved build and test workflows
- Generate auto-documentation for subdomain discovery configuration reference
- Improve code maintainability through automated generation of constants and schemas
2026-01-20 21:09:55 +08:00
yyhuni
32e3179d58 refactor(frontend): optimize dashboard animations and extract dashboard data prefetch logic
- Remove "use client" directive from dashboard page and convert to server component
- Replace manual fade-in animation state with CSS animation class `animate-dashboard-fade-in`
- Extract dashboard data prefetch logic into reusable `prefetchDashboardData` callback in login page
- Parallelize login verification and bundle prefetch operations for faster execution
- Implement dynamic import for Monaco Editor with loading state to reduce bundle size (~2MB)
- Fix dependency array in template content effect to include full `templateContent` object
- Add `tree-node-item` class with `content-visibility: auto` for long list rendering optimization
- Simplify login flow by reusing extracted prefetch function to reduce code duplication
- Improves perceived performance by reducing animation overhead and optimizing bundle loading
2026-01-20 08:42:02 +08:00
yyhuni
487f7c84b5 fix(frontend): add null checks to PixelBlast renderer initialization
- Add renderer null check in setSize function to prevent errors during initialization
- Add renderer validation before composer.setSize call to ensure renderer exists
- Add null check in mapToPixels function to return safe default values when renderer is unavailable
- Add renderer existence check before calling renderer.render in animation loop
- Improve robustness of Three.js renderer lifecycle management to prevent runtime errors
2026-01-20 08:02:01 +08:00
yyhuni
b2cc83f569 feat(frontend): optimize login flow with dashboard data preloading and enhanced animations
- Implement dashboard data prefetching on successful login to eliminate loading delays
- Add blur transition effect to dashboard fade-in animation for smoother visual experience
- Replace login success splash screen logic with efficient data warming strategy
- Prefetch critical dashboard queries (asset statistics, scans, vulnerabilities) before navigation
- Prime auth cache to prevent full-screen loading state on dashboard entry
- Add pixel animation first-frame detection to coordinate boot splash timing
- Refactor login state management to use refs and callbacks for better control flow
- Update dashboard page transition to use will-change optimization for better performance
- Remove hardcoded login success delay constants in favor of data-driven navigation
- Improve user experience by seamlessly transitioning from login to fully-loaded dashboard
2026-01-19 23:49:16 +08:00
yyhuni
f854cf09be feat(frontend): add login success splash screen and dashboard fade-in animation
- Add "use client" directive to dashboard page for client-side state management
- Implement fade-in animation on dashboard page load using opacity transition
- Add success state tracking to login page with configurable delay timers
- Create separate boot screen output for successful authentication flow
- Add success prop to LoginBootScreen component to display auth success messages
- Define new constants for login success delay (1200ms) and fade duration (500ms)
- Update boot screen to conditionally render success or standard boot lines
- Enhance user experience with visual feedback during authentication completion
2026-01-19 22:31:16 +08:00
yyhuni
7e1c2c187a chore(skills): add Vercel React best practices guidelines for agents
- Add comprehensive Vercel React best practices skill documentation across .agent, .codex, and .gemini directories
- Include 50+ rule files covering async patterns, bundle optimization, client-side performance, and server-side rendering
- Add SKILL.md and AGENTS.md metadata files for skill configuration and agent integration
- Organize rules into categories: advanced patterns, async handling, bundle optimization, client optimization, JavaScript performance, rendering optimization, re-render prevention, and server-side caching
- Provide standardized guidelines for performance optimization and best practices across multiple AI agent platforms
2026-01-19 20:14:08 +08:00
yyhuni
4abb259ca0 feat(frontend): rebrand to Orbit and add login boot splash screen
- Replace "Star Patrol ASM Platform" branding with "Orbit ASM Platform" throughout
- Add SVG icon support and remove favicon.ico in favor of icon.svg
- Create new LoginBootScreen component for boot splash animation
- Implement boot phase state management (entering, visible, leaving, hidden)
- Add smooth transitions and animations for login page overlay
- Update metadata icons configuration to use SVG format
- Add glitch reveal animations and jitter effects to globals.css
- Enhance login page UX with minimum splash duration and fade transitions
- Update English and Chinese translations for new branding
- Improve system logs mock data structure
- Update package.json dependencies and configuration
- Ensure splash screen displays before auth check completes and redirect occurs
2026-01-19 11:10:02 +08:00
yyhuni
bbef6af000 fix(frontend): update filter examples to use correct wildcard syntax
- Replace wildcard patterns using asterisks (*) with trailing-slash notation
- Update directories filter example from "/api/*" to "/api/"
- Update endpoints filter example from "/api/*" to "/api/"
- Update IP addresses filter example from "192.168.1.*" to "192.168.1."
- Update subdomains filter example from "*.test.com" to ".test.com"
- Update vulnerabilities filter example from "/api/*" to "/api/"
- Update websites filter example from "/api/*" to "/api/"
- Standardize filter syntax across all data table components for consistency
2026-01-18 21:41:30 +08:00
yyhuni
ba0864ed16 feat(target): add help tooltip for directories tab and update translations
- Import HelpCircle icon from lucide-react for help indicator
- Import Tooltip components for displaying contextual help information
- Restructure navigation layout to support help tooltip alongside tabs
- Add conditional tooltip display when directories tab is active
- Add directoriesHelp translation key to English messages
- Add directoriesHelp translation key to Chinese messages
- Improve UX by providing contextual guidance for directories functionality
2026-01-18 10:23:33 +08:00
yyhuni
f54827829a feat(dashboard): add vulnerability review status tracking and severity column
- Add review status indicator (pending/reviewed) to recent vulnerabilities table with visual badges
- Display severity column in vulnerability table for better visibility
- Import Circle and CheckCircle2 icons from lucide-react for status indicators
- Add tooltip translations for "reviewed" and "pending" status labels
- Update mock vulnerability data with isReviewed property for all entries
- Implement conditional styling for pending (blue) and reviewed (muted) status badges
- Enhance table layout to show vulnerability severity alongside review status
2026-01-18 08:58:18 +08:00
yyhuni
170021130c feat(worker): implement subdomain discovery workflow stages with wildcard detection
- Add stage_bruteforce.go with bruteforce subdomain enumeration logic
- Add stage_passive.go with passive reconnaissance stage implementation
- Add stage_merge.go with file merging and wildcard domain detection
- Add stages.go with stage orchestration and utility functions
- Update workflow.go to integrate new stages into discovery pipeline
- Implement wildcard detection with sampling and expansion ratio analysis
- Add deduplication logic during file merging to reduce redundant entries
- Implement parallel command execution for bruteforce operations
- Add wordlist management with local caching from server
- Include comprehensive logging and error handling throughout stages
2026-01-17 23:18:28 +08:00
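The "sampling and expansion ratio analysis" above boils down to probing random labels under a domain and checking what fraction resolve: if random names "expand" into results, the zone is likely wildcarded. The threshold below is an assumed value, not the worker's actual tuning:

```go
package main

import "fmt"

// isWildcard reports whether a domain looks wildcarded: if the fraction of
// random probe labels (e.g. "xk3f9a2.example.com") that resolved meets the
// threshold, random names expand into results and the zone is flagged.
func isWildcard(resolvedProbes, totalProbes int, threshold float64) bool {
	if totalProbes == 0 {
		return false
	}
	return float64(resolvedProbes)/float64(totalProbes) >= threshold
}

func main() {
	fmt.Println(isWildcard(9, 10, 0.8)) // true: most random probes resolved
	fmt.Println(isWildcard(1, 10, 0.8)) // false: probably real records only
}
```

Results from a flagged zone would then be filtered during the merge stage rather than treated as genuine subdomains.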
yyhuni
b540f69152 feat(worker): implement subdomain discovery workflow and enhance validation
- Rename IsSubdomainMatchTarget to IsSubdomainOfTarget for clarity
- Add subdomain discovery workflow with template loader and helpers
- Implement workflow registry for managing scan workflows
- Add domain validator package for input validation
- Create wordlist server component for managing DNS resolver lists
- Add template loader activity for dynamic template management
- Implement worker configuration module with environment setup
- Update worker dependencies to include projectdiscovery/utils and govalidator
- Consolidate workspace directory configuration (WORKSPACE_DIR replaces RESULTS_BASE_PATH)
- Update seed generator to use standardized bulk-create API endpoint
- Update all service layer calls to use renamed validation function
2026-01-17 21:15:02 +08:00
yyhuni
d7f1e04855 chore: add server/.env to .gitignore and remove from git tracking 2026-01-17 08:25:45 +08:00
yyhuni
68ad18e6da Rename to Orbit 2026-01-16 09:03:20 +08:00
yyhuni
a7542d4a34 Rename backend directory to server 2026-01-15 16:19:00 +08:00
yyhuni
6f02d9f3c5 feat(api): standardize API endpoints and update data generation logic
- Rename IP address endpoints from `/ip-addresses/` to `/host-ports` for consistency
- Update vulnerability endpoints from `/assets/vulnerabilities/` to `/vulnerabilities/`
- Remove trailing slashes from API endpoint paths for standardization
- Remove explicit `type` field from target generation in seed data
- Enhance website generation with deduplication logic and attempt limiting
- Add default admin user seed data to database initialization migration
- Improve data generator to prevent infinite loops and handle unique URL combinations
- Align frontend service calls with updated backend API structure
2026-01-15 13:02:26 +08:00
yyhuni
794846ca7a feat(backend): enhance vulnerability schema and add target validation for snapshots
- Expand vulnerability and vulnerability_snapshot table column sizes for better data handling
* Change url column from VARCHAR(2000) to TEXT for unlimited length
* Increase vuln_type from VARCHAR(100) to VARCHAR(200)
* Increase source from VARCHAR(50) to VARCHAR(100)
- Add input validation constraints to vulnerability DTOs
* Add max=200 binding constraint to VulnType field
* Add max=100 binding constraint to Source field
- Implement consistent target ID validation across snapshot handlers
* Add ErrTargetMismatch error handling in subdomain_snapshot handler
* Add ErrTargetMismatch error handling in website_snapshot handler
* Replace generic error strings with ErrTargetMismatch constant in services
- Improve error handling consistency by using defined error types instead of generic error messages
2026-01-15 12:33:19 +08:00
yyhuni
5eea7b2621 feat(backend): add input validation and default value initialization for models
- Add Content-Type validation in BindJSON to enforce application/json requests
- Implement BeforeCreate hooks for array and JSONB field initialization across models
* Endpoint and EndpointSnapshot: initialize Tech and MatchedGFPatterns arrays
* Scan: initialize EngineIDs, ContainerIDs arrays and StageProgress JSONB
* Vulnerability and VulnerabilitySnapshot: initialize RawOutput JSONB
* Website and WebsiteSnapshot: initialize Tech array
- Add ErrTargetMismatch error handling in snapshot handlers
* DirectorySnapshot, HostPortSnapshot, ScreenshotSnapshot handlers now validate targetId matches scan's target
- Enhance target validation in filter and validator packages
- Improve service layer validation for subdomain, website, and host port snapshots
- Prevent null/nil values in database by ensuring proper default initialization
2026-01-15 12:21:35 +08:00
yyhuni
069527a7f1 feat(backend): implement vulnerability and screenshot snapshot APIs with directories tab reorganization
- Add vulnerability snapshot DTO, handler, repository, and service layer with comprehensive test coverage
- Add screenshot snapshot DTO, handler, repository, and service layer for snapshot management
- Reorganize directories tab from secondary assets navigation to primary navigation in scan history and target layouts
- Update frontend navigation to include FolderSearch icon for directories tab with badge count display
- Add i18n translations for directories tab in English and Chinese messages
- Implement seed data generation tools with Python API client for testing and data population
- Add data generator, error handler, and progress tracking utilities for seed API
- Update target validator to support new snapshot-related validations
- Refactor organization and vulnerability handlers to support snapshot operations
- Add integration tests and property-based tests for vulnerability snapshot functionality
- Update Go module dependencies to support new snapshot features
2026-01-15 10:25:34 +08:00
yyhuni
e542633ad3 refactor(backend): consolidate migration files and restructure host port entities
- Remove seed data generation command (cmd/seed/main.go)
- Consolidate database migrations into single init schema file
- Rename ip_address DTO to host_port for consistency
- Add host_port_snapshot DTO and model for snapshot tracking
- Rename host_port handler and repository files for clarity
- Implement host_port_snapshot service layer with CRUD operations
- Update website_snapshot service to work with new host_port structure
- Enhance terminal login UI with focus state tracking and Tab key navigation
- Update docker-compose configuration for development environment
- Refactor directory and website snapshot DTOs for improved data structure
- Add comprehensive test coverage for model and handler changes
- Simplify database schema by consolidating related migrations into single initialization file
2026-01-14 18:04:16 +08:00
yyhuni
e8a9606d3b Optimize performance 2026-01-14 16:41:35 +08:00
yyhuni
dc2e1e027d Complete snapshot table APIs for endpoint, website, subdomain, and directory 2026-01-14 16:38:20 +08:00
yyhuni
b1847faa3a feat(frontend): add throttled ripple effect on mouse move to PixelBlast
- Add enableRipples prop to PixelBlast component for conditional ripple control
- Implement throttled ripple triggering on pointer move events (150ms interval)
- Remove separate pointerdown event listener and consolidate ripple logic
- Refactor onPointerMove to handle both ripple effects and liquid touch separately
- Improve performance by preventing excessive ripple calculations on rapid movements
2026-01-14 11:42:12 +08:00
yyhuni
e699842492 perf(frontend): optimize login page and animations with memoization and accessibility
- Memoize translations object in login page to prevent unnecessary re-renders
- Add support for prefers-reduced-motion media query in PixelBlast component
- Implement IntersectionObserver and Page Visibility API for intelligent animation pausing
- Limit device pixel ratio based on device type (mobile vs desktop) for better performance
- Add maxPixelRatio parameter to PixelBlast for fine-grained performance control
- Add autoPlay prop to Shuffle component for flexible animation control
- Disable autoPlay on Shuffle text animations in terminal login for better UX
- Add accessibility label to PixelBlast container when reduced motion is enabled
- Improve mobile performance by capping pixel ratio to 1.5 on mobile devices
- Respect user accessibility preferences while maintaining visual quality on desktop
2026-01-14 11:33:11 +08:00
yyhuni
08a4807bef feat(frontend): enhance terminal login UI with improved styling and i18n shortcuts
- Update PixelBlast animation with increased pixel size (6.5) and speed (0.35)
- Replace semantic color tokens with explicit zinc color palette for better visual consistency
- Add keyboard shortcuts translations to support multiple languages (en, zh)
- Implement i18n for all terminal UI labels: submit, cancel, clear, start/end actions
- Update terminal header and content styling with zinc-700 borders and zinc-100 text
- Enhance keyboard shortcuts hint display with localized action labels
- Improve text color hierarchy using zinc-400, zinc-500, and zinc-600 variants
2026-01-14 10:58:12 +08:00
yyhuni
191ff9837b feat(frontend): redesign login page with terminal UI and pixel blast animation
- Replace traditional card-based login form with immersive terminal-style interface
- Add PixelBlast animated background component for cyberpunk aesthetic
- Implement TerminalLogin component with typewriter and terminal effects
- Add new animation components: FaultyTerminal, PixelBlast, Shuffle with CSS modules
- Add gravity-stars background animation component from animate-ui
- Add terminal cursor blink animation to global styles
- Update login page translations to support terminal UI prompts and messages
- Replace Lottie animation with dynamic WebGL-based PixelBlast component
- Add dynamic imports to prevent SSR issues with WebGL rendering
- Update component registry to include @magicui and @react-bits registries
- Refactor login form state management to use async/await pattern
- Add fingerprint meta tag for search engine identification (FOFA/Shodan)
- Improve visual hierarchy with relative z-index layering for background and content
2026-01-14 10:48:41 +08:00
yyhuni
679dff9037 refactor(frontend): unify filter UI components and enhance smart filtering
- Replace DropdownMenu with Select component for severity filtering across data tables
- Add Filter icon from lucide-react to filter triggers for consistent visual design
- Update SelectTrigger width from fixed pixels to auto for responsive layout
- Integrate SmartFilterInput component into vulnerabilities data table
- Refactor severity filter options to use object structure with translated labels
- Consolidate filter UI patterns across organization targets, scan history, and vulnerabilities tables
- Register @animate-ui component registry in components.json
- Improve filter UX with consistent icon usage and flexible sizing
2026-01-14 09:51:35 +08:00
yyhuni
ce4330b628 refactor(frontend): centralize severity styling configuration
- Extract severity color and style definitions into dedicated severity-config module
- Create SEVERITY_STYLES constant with unified badge styling for all severity levels
- Create SEVERITY_COLORS constant for chart visualization consistency
- Add getSeverityStyle() helper function for dynamic severity badge generation
- Add SEVERITY_CARD_STYLES and SEVERITY_ICON_BG constants for notification styling
- Update dashboard components to use centralized severity configuration
- Update fingerprint columns to use getSeverityStyle() helper
- Update notification drawer to reference centralized severity styles
- Update search result cards to use centralized configuration
- Update vulnerability components to import from severity-config module
- Eliminate duplicate severity styling definitions across multiple components
- Improve maintainability by having single source of truth for severity styling
2026-01-14 09:05:14 +08:00
yyhuni
4ce6b148f8 feat(frontend): enhance vulnerability review status display with icons
- Add Circle and CheckCircle2 icons from lucide-react for visual status indicators
- Update reviewStatus column sizing (100px size, 90-110px range) for better layout
- Implement icon rendering: Circle for pending status, CheckCircle2 for reviewed
- Enhance Badge styling with improved hover states and ring effects
- Add gap spacing between icon and text in status badge
- Refactor status logic to use isPending variable for clearer code
- Update Chinese translations for review action labels to be more descriptive
- Improve visual feedback with conditional styling based on review status
- Maintain cursor pointer behavior only when onToggleReview callback is available
2026-01-14 08:43:47 +08:00
yyhuni
a89f775ee9 Complete vulnerability review and basic scan CRUD 2026-01-14 08:21:46 +08:00
yyhuni
e3003f33f9 Complete vulnerability review and basic scan CRUD 2026-01-14 08:21:34 +08:00
yyhuni
3760684b64 feat: add vulnerability review status feature
- Add is_reviewed and reviewed_at fields to vulnerability table
- Add PATCH /api/vulnerabilities/:id/review and /unreview endpoints
- Add POST /api/vulnerabilities/bulk-review and /bulk-unreview endpoints
- Add isReviewed filter parameter to list APIs
- Update frontend with review status indicator, filter tabs, and bulk actions
- Add i18n translations for review status
2026-01-13 19:53:12 +08:00
yyhuni
bfd7e11d09 perf(backend): optimize database seeding with batch inserts
- Replace individual Create() calls with CreateInBatches() for organizations, targets, and websites to reduce database round trips
- Build all records in memory before batch insertion instead of inserting one-by-one
- Implement chunked batch insert for organization-target links to handle large datasets efficiently
- Add ON CONFLICT DO NOTHING clause for website creation to handle duplicates gracefully
- Use strings.Join() for efficient SQL query construction in bulk insert operations
- Improve seeding performance by reducing database transactions from O(n) to O(n/batch_size)
- Add missing imports (strings, clause) required for batch operations
2026-01-13 18:57:18 +08:00
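The chunked insert for organization-target links above reduces round trips by grouping records. A generic sketch of just the batching step (batch size and element type are arbitrary; the real code hands each batch to GORM's CreateInBatches or a bulk INSERT):

```go
package main

import "fmt"

// chunk splits items into batches of at most size elements. Each batch is a
// subslice of the input, so no copying happens beyond the batch headers.
func chunk[T any](items []T, size int) [][]T {
	var batches [][]T
	for size > 0 && len(items) > 0 {
		n := size
		if len(items) < n {
			n = len(items)
		}
		batches = append(batches, items[:n])
		items = items[n:]
	}
	return batches
}

func main() {
	linkIDs := []int{1, 2, 3, 4, 5, 6, 7}
	for _, b := range chunk(linkIDs, 3) {
		fmt.Println(b) // [1 2 3], [4 5 6], [7]
	}
}
```

With a batch size of 1000, inserting n rows costs roughly n/1000 statements instead of n, which is the O(n) to O(n/batch_size) improvement the commit describes.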
yyhuni
f758feb0d0 Improve the vulnerability API 2026-01-13 18:46:43 +08:00
yyhuni
8798eed337 feat(backend,frontend): implement wordlist management and engine patch endpoint
- Add wordlist management system with create, list, delete, and content operations
- Implement wordlist repository, service, and handler layers
- Add wordlist DTO models for API requests and responses
- Create wordlist storage configuration with base path setting
- Add PATCH endpoint for partial engine updates alongside existing PUT endpoint
- Implement PatchEngineRequest DTO for optional field updates
- Add wordlist routes: POST/GET/DELETE for management, GET/PUT for content operations
- Remove redundant toast notifications from engine edit dialog (handled by hook)
- Configure storage settings in application config with environment variable support
- Initialize wordlist service and handler in main server setup
2026-01-13 18:03:36 +08:00
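A storage layer keyed by a configurable base path, like the wordlist storage above, typically needs to keep user-supplied names inside that path. A hedged sketch of such a containment check (the function name and base path are hypothetical, not the server's actual code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// wordlistPath resolves a wordlist name under the configured base path and
// rejects names that would escape it via "..". filepath.Join cleans the
// result, so traversal collapses before the prefix check runs.
func wordlistPath(base, name string) (string, error) {
	p := filepath.Join(base, name)
	if !strings.HasPrefix(p, filepath.Clean(base)+string(filepath.Separator)) {
		return "", fmt.Errorf("wordlist name %q escapes base path", name)
	}
	return p, nil
}

func main() {
	p, err := wordlistPath("/data/wordlists", "dns/resolvers.txt")
	fmt.Println(p, err) // resolved path, no error
	_, err = wordlistPath("/data/wordlists", "../../etc/passwd")
	fmt.Println(err != nil) // true: traversal rejected
}
```

The prefix comparison includes the trailing separator so a sibling directory such as `/data/wordlists2` cannot slip past the check.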
yyhuni
bd1e25cfd5 Add validation for screenshot creation 2026-01-13 17:42:19 +08:00
yyhuni
d775055572 Complete the screenshot API 2026-01-13 17:35:57 +08:00
yyhuni
00dfad60b8 Complete asset statistics counts for targets 2026-01-13 16:55:37 +08:00
yyhuni
a5c48fe4d4 feat(frontend,backend): implement IP address management and export functionality
- Add IP address DTO, handler, service, and repository layers in Go backend
- Implement IP address bulk delete endpoint at /ip-addresses/bulk-delete/
- Add IP address export endpoint with optional IP filtering by target
- Simplify IP address hosts column display using ExpandableCell component
- Update IP address export to support filtering selected IPs for download
- Add error handling and toast notifications for export operations
- Internationalize IP address column labels and tooltips in Chinese
- Update IP address service to support filtered exports with comma-separated IPs
- Add host-port mapping seeding for test data generation
- Refactor scope filter and repository queries to support IP address operations
2026-01-13 16:42:57 +08:00
yyhuni
85c880731c feat(frontend): internationalize data table and website columns
- Add Chinese translations for common column labels (name, description, status, actions, type)
- Translate vulnerability column headers (severity, source, vulnType, url, createdAt)
- Translate organization and target column headers to Chinese
- Translate subdomain and endpoint column headers with full Chinese localization
- Add comprehensive website column translations including statusCode, technologies, contentLength
- Translate directory and scheduledScan column headers to Chinese
- Update UnifiedDataTable to use i18n for "Columns" button text via tDataTable("showColumns")
- Fix websites-view to use correct translation key "website.statusCode" instead of "common.status"
- Ensure consistent terminology across all data table views for better user experience
2026-01-13 10:16:43 +08:00
yyhuni
c6b6507412 feat(frontend): internationalize tab labels in scan history and target layouts
- Replace hardcoded tab labels with i18n translation keys in scan history layout
- "Websites" → {t("tabs.websites")}
- "Subdomains" → {t("tabs.subdomains")}
- "IPs" → {t("tabs.ips")}
- "URLs" → {t("tabs.urls")}
- "Directories" → {t("tabs.directories")}
- Replace hardcoded tab labels with i18n translation keys in target layout
- Apply same translation key replacements across all tab triggers
- Add new tab translation keys to English messages (en.json)
- tabs.websites, tabs.subdomains, tabs.ips, tabs.urls, tabs.directories
- Add new tab translation keys to Chinese messages (zh.json)
- Standardize terminology: "网站" → "站点", "端点" → "URL"
- Update related dashboard and stat card translations for consistency
- Ensures consistent multilingual support across scan history and target management interfaces
2026-01-13 09:58:34 +08:00
yyhuni
af457dc44c feat(frontend,backend): implement directory, endpoint, and subdomain management APIs
- Remove words, lines, and duration fields from directory model and UI components
- Simplify directory columns by removing unnecessary metrics from table display
- Add directory, endpoint, and subdomain DTOs with proper validation and pagination
- Implement handlers for directory, endpoint, and subdomain CRUD operations
- Create repository layer for directory, endpoint, and subdomain data access
- Add service layer for directory, endpoint, and subdomain business logic
- Update API routes to use standalone endpoints (/directories, /endpoints, /subdomains)
- Fix subdomain bulk-create payload to use 'names' field instead of 'subdomains'
- Add database migration to drop unused directory_words and directory_lines tables
- Update seed data generation to support websites, endpoints, and directories per target
- Add target validator tests for improved test coverage
- Refactor subdomain service to support new API structure
2026-01-13 09:47:34 +08:00
yyhuni
9e01a6aa5e fix(frontend,backend): move bulk-delete endpoint to standalone websites route
- Move bulk-delete endpoint from `/targets/:id/websites/bulk-delete` to `/websites/bulk-delete`
- Update frontend WebsiteService to use new standalone endpoint path
- Update Go backend router configuration to register bulk-delete under standalone websites routes
- Update handler documentation to reflect correct endpoint path
- Simplifies API structure by treating bulk operations as standalone website operations rather than target-scoped
2026-01-12 22:16:34 +08:00
yyhuni
ed80772e6f feat(frontend,backend): implement website management and i18n for bulk operations
- Add website service layer with CRUD operations and filtering support
- Implement website handler with complete API endpoints
- Add website repository with database operations and query optimization
- Create website DTO for API request/response serialization
- Implement CSV export functionality for asset data
- Add scope filtering package for dynamic query building with tests
- Create database migrations for schema initialization and GIN indexes
- Migrate bulk add dialog to use i18n translations instead of hardcoded strings
- Update all frontend hooks to support pagination and filtering parameters
- Refactor organization and target services with improved error handling
- Add seed command for database initialization with sample data
- Update frontend messages (en.json, zh.json) with bulk operation translations
- Improve API client with better error handling and request formatting
- Add database migration runner to backend initialization
- Update go.mod and go.sum with new dependencies
2026-01-12 22:10:08 +08:00
yyhuni
a22af21dcb feat(frontend,backend): optimize data fetching and add database seeding
- Add database seeding utility (cmd/seed/main.go) to generate test data for organizations and targets
- Implement conditional query execution in useTargets hook with enabled option to prevent unnecessary requests
- Reduce page size from 50 to 20 in scheduled scan dialog for better performance
- Update target DTO and handler to support improved query filtering
- Enhance target repository with optimized database queries
- Replace generic "Add" button text with localized "Add Target" text in target views
- Remove redundant addButtonText prop from organization detail view
- Improve code formatting and add explanatory comments for data fetching logic
- These changes reduce unnecessary API calls on page load and provide better test data management for development
2026-01-12 18:43:16 +08:00
yyhuni
8de950a7a5 feat(organization): refactor target creation flow and fix target count queries
- Replace useBatchCreateTargets hook with direct service call in AddOrganizationDialog to avoid double toast notifications
- Simplify dialog state management by using isCreatingTargets boolean instead of mutation pending state
- Consolidate form reset and dialog close logic to execute after both organization and targets are created
- Fix target count queries in OrganizationRepository to exclude soft-deleted targets using INNER JOIN with deleted_at check
- Update FindByIDWithCount and FindAll methods to properly filter out deleted targets from count calculations
- Handle 204 No Content responses in batchCreateTargets service by returning default success response
2026-01-12 18:17:44 +08:00
yyhuni
9db84221e9 Complete part of the organization and target backend APIs
Rename the frontend project to 星巡
2026-01-12 17:59:37 +08:00
yyhuni
0728f3c01d feat(go-backend): add database auto-migration and fix Website model naming
- Add comprehensive database auto-migration in main.go with all models organized by dependency order
- Include core models (Organization, User, Target, ScanEngine, WorkerNode, etc.)
- Include scan-related models (Scan, ScanInputTarget, ScanLog, ScheduledScan)
- Include asset models (Subdomain, HostPortMapping, Website, Endpoint, Directory, Screenshot, Vulnerability)
- Include snapshot models for all asset types
- Include statistics and authentication models
- Rename WebSite struct to Website for consistency with Go naming conventions
- Update TableName method to reflect Website naming
- Add migration logging for debugging and monitoring purposes
2026-01-11 22:30:36 +08:00
yyhuni
4aa7b3d68a feat(go-backend): implement complete API layer with handlers, services, and repositories
- Add DTOs for user, organization, target, engine, pagination, and response handling
- Implement repository layer for user, organization, target, and engine entities
- Implement service layer with business logic for all core modules
- Implement HTTP handlers for user, organization, target, and engine endpoints
- Add complete CRUD API routes with soft delete support for organizations and targets
- Add environment configuration file with database, Redis, and logging settings
- Add docker-compose.dev.yml for PostgreSQL and Redis development dependencies
- Add comprehensive README.md with migration progress, API endpoints, and tech stack
- Update main.go to wire repositories, services, and handlers with dependency injection
- Update config.go to support .env file loading with environment variable priority
- Update database.go to initialize all repositories and services
2026-01-11 22:07:27 +08:00
yyhuni
3946a53337 refactor(go-backend): switch from Django pbkdf2 to bcrypt
- Simplify password.go to use bcrypt (standard Go approach)
- Remove Django password compatibility (not needed for fresh deployment)
- Update auth_handler to use VerifyPassword()
- All tests passing
2026-01-11 20:58:53 +08:00
yyhuni
c94fe1ec4b feat(go-backend): implement JWT authentication
- Add JWT token generation and validation (internal/auth/jwt.go)
- Add Django-compatible password verification (internal/auth/password.go)
- Add auth middleware for protected routes (internal/middleware/auth.go)
- Add auth handler with login, refresh, me endpoints (internal/handler/auth_handler.go)
- Add JWT config (secret, access/refresh expire times)
- Register auth routes in main.go
- All tests passing

API endpoints:
- POST /api/auth/login - User login
- POST /api/auth/refresh - Refresh access token
- GET /api/auth/me - Get current user (protected)
2026-01-11 20:55:59 +08:00
yyhuni
6dea525527 feat(go-backend): add indexes and unique constraints to all models
- Add index tags for query optimization (idx_xxx)
- Add uniqueIndex tags for unique constraints
- Add composite unique indexes (e.g., unique_subdomain_name_target)
- Update Organization/Target to many-to-many relationship
- All models now ready for GORM AutoMigrate
- All tests passing
2026-01-11 20:47:25 +08:00
yyhuni
5b0416972a feat(go-backend): complete all Go models
- Add scan-related models: ScanLog, ScanInputTarget, ScheduledScan, SubfinderProviderSettings
- Add engine models: Wordlist, NucleiTemplateRepo
- Add notification models: Notification, NotificationSettings
- Add config model: BlacklistRule
- Add statistics models: AssetStatistics, StatisticsHistory
- Add auth models: User (auth_user), Session (django_session)
- Add shopspring/decimal dependency for Vulnerability model
- Update model_test.go with all 33 model table name tests
- All tests passing
2026-01-11 20:29:11 +08:00
yyhuni
5345a34cbd refactor: remove Prefect 2026-01-11 19:31:47 +08:00
github-actions[bot]
3ca56abc3e chore: bump version to v1.5.12-dev 2026-01-11 09:22:30 +00:00
yyhuni
9703add22d feat(nuclei): support configurable Nuclei templates repository with Gitee mirror
- Add NUCLEI_TEMPLATES_REPO_URL setting to allow runtime configuration of template repository URL
- Refactor install.sh mirror parameter handling to use boolean flag instead of URL string
- Replace hardcoded GitHub repository URL with Gitee mirror option for faster downloads in mainland China
- Update environment variable configuration to persist Nuclei repository URL in .env file
- Improve shell script variable quoting and conditional syntax for better reliability
- Simplify mirror detection logic by using USE_MIRROR boolean flag throughout installation process
- Add support for automatic Gitee mirror selection when --mirror flag is enabled
2026-01-11 17:19:09 +08:00
github-actions[bot]
f5a489e2d6 chore: bump version to v1.5.11-dev 2026-01-11 08:54:04 +00:00
yyhuni
d75a3f6882 fix(task_distributor): adjust high load wait parameters and improve timeout handling
- Increase high load wait interval from 60 to 120 seconds (2 minutes)
- Increase max retries from 10 to 60 to support up to 2 hours total wait time
- Improve timeout message to show actual wait duration in minutes
- Remove duplicate return statement in worker selection logic
- Update notification message to reflect new wait parameters (2 minutes check interval, 2 hours max wait)
- Clean up trailing whitespace in task_distributor.py
- Remove redundant error message from install.sh about missing/incorrect image versions
- Better handling of high load scenarios with clearer logging and user communication
2026-01-11 16:41:05 +08:00
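The adjusted parameters (120-second check interval, 60 retries, roughly 2 hours total) amount to a bounded polling loop. A sketch with an injectable sleep for testability; `check_load_ok` is a hypothetical callable, not a name from the source:

```python
import time

def wait_for_capacity(check_load_ok, interval_seconds=120, max_retries=60, sleep=time.sleep):
    """Poll until the load check passes; give up after interval * retries (~2 hours)."""
    for attempt in range(1, max_retries + 1):
        if check_load_ok():
            return attempt  # capacity available on this attempt
        sleep(interval_seconds)
    waited_minutes = interval_seconds * max_retries // 60
    raise TimeoutError(f"system still under high load after {waited_minutes} minutes")
```

Reporting the actual wait duration in the timeout message mirrors the improved messaging this commit describes.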
github-actions[bot]
59e48e5b15 chore: bump version to v1.5.10-dev 2026-01-11 08:19:39 +00:00
yyhuni
2d2ec93626 perf(screenshot): optimize memory usage and add URL collection fallback logic
- Add iterator(chunk_size=50) to ScreenshotSnapshot query to prevent BinaryField data caching and reduce memory consumption
- Implement fallback logic in URL collection: WebSite → HostPortMapping → Default URL with priority handling
- Update _collect_urls_from_provider to return tuple with data source information for better logging and debugging
- Add detailed logging to track which data source was used during URL collection
- Improve code documentation with clear return type hints and fallback priority explanation
- Prevents memory spikes when processing large screenshot datasets with binary image data
2026-01-11 16:14:56 +08:00
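The `iterator(chunk_size=50)` change streams rows in small batches instead of caching every BinaryField blob in memory at once. A generic Python sketch of the same pattern, with `fetch_page` standing in for the Django queryset (Django does this server-side with a database cursor):

```python
def iter_in_chunks(fetch_page, chunk_size=50):
    """Yield rows page by page so large binary columns never all sit in memory.

    fetch_page(offset, limit) is a stand-in for a DB query returning a list
    of rows; an empty result signals the end of the data set.
    """
    offset = 0
    while True:
        rows = fetch_page(offset, chunk_size)
        if not rows:
            return
        yield from rows
        offset += chunk_size
```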
github-actions[bot]
ced9f811f4 chore: bump version to v1.5.8-dev 2026-01-11 08:09:37 +00:00
yyhuni
aa99b26f50 fix(vuln_scan): use tool-specific parameter names for endpoint scanning
- Add conditional logic to use "input_file" parameter for nuclei tool
- Use "endpoints_file" parameter for other scanning tools
- Improve compatibility with different vulnerability scanning tools
- Ensure correct parameter naming based on tool requirements
2026-01-11 15:59:39 +08:00
yyhuni
8342f196db Make nuclei website scanning a default 2026-01-11 12:13:27 +08:00
yyhuni
1bd2a6ed88 refactor: complete the provider implementation 2026-01-11 11:15:59 +08:00
yyhuni
033ff89aee refactor: source data through providers 2026-01-11 10:29:27 +08:00
yyhuni
4284a0cd9a refactor(scan): remove deprecated provider implementations and cleanup
- Delete ListTargetProvider implementation and related tests
- Delete PipelineTargetProvider implementation and related tests
- Remove target_export_service.py unused service module
- Remove test files for common properties validation
- Update engine-preset-selector component in frontend
- Remove sponsor acknowledgment section from README
- Simplify provider architecture by consolidating implementations
2026-01-10 23:53:52 +08:00
yyhuni
943a4cb960 docs(docker): remove default credentials from startup message
- Remove hardcoded default username and password display from docker startup script
- Remove warning message about changing password after first login
- Improve security by not exposing default credentials in startup output
- Simplifies startup message output for cleaner user experience
2026-01-10 11:21:14 +08:00
yyhuni
eb2d853b76 docs: remove emoji symbols from README for better accessibility
- Remove shield emoji (🛡️) from main title
- Replace emoji prefixes in navigation links with plain text anchors
- Remove emoji icons from section headers (🌐, 📚, 📦, 🤝, 📧, 🎁, 🙏, ⚠️, 🌟, 📄)
- Replace emoji status indicators (⚠️, 🔍, 💡) with plain text equivalents
- Remove emoji bullet points and replace with standard formatting
- Simplify documentation for improved readability and cross-platform compatibility
2026-01-10 11:17:43 +08:00
github-actions[bot]
1184c18b74 chore: bump version to v1.5.7 2026-01-10 03:10:45 +00:00
yyhuni
8a6f1b6f24 feat(engine): add --force-sub flag for selective engine config updates
- Add --force-sub command flag to init_default_engine management command
- Allow updating only sub-engines while preserving user-customized full scan config
- Update docker/scripts/init-data.sh to always update full scan engine configuration
- Change docker/server/start.sh to use --force flag for initial engine setup
- Improve update.sh with better logging functions and formatted output
- Add color-coded log functions (log_step, log_ok, log_info, log_warn, log_error)
- Enhance update.sh UI with better visual formatting and warning messages
- Refactor error messages and user prompts for improved clarity
- This enables safer upgrades by preserving custom full scan configurations while updating sub-engines
2026-01-10 11:04:42 +08:00
yyhuni
255d505aba refactor(scan): remove deprecated amass engine configurations
- Remove amass_passive engine configuration from subdomain discovery defaults
- Remove amass_active engine configuration from subdomain discovery defaults
- Simplify engine configuration by eliminating unused amass-based scanners
- Streamline the default engine template for better maintainability
2026-01-10 10:51:07 +08:00
github-actions[bot]
d06a9bab1f chore: bump version to v1.5.7-dev 2026-01-10 02:48:21 +00:00
yyhuni
6d5c776bf7 chore: improve version detection and update deployment configuration
- Update version detection to support IMAGE_TAG environment variable for Docker containers
- Add fallback mechanism to check multiple version file paths (/app/VERSION and project root)
- Add IMAGE_TAG environment variable to docker-compose.dev.yml and docker-compose.yml
- Fix frontend access URL in start.sh to include correct port (8083)
- Update upgrade warning message in update.sh to recommend fresh installation with latest code
- Improve robustness of version retrieval with better error handling for missing files
2026-01-10 10:41:36 +08:00
github-actions[bot]
bf058dd67b chore: bump version to v1.5.6-dev 2026-01-10 02:33:15 +00:00
yyhuni
0532d7c8b8 feat(notifications): add WeChat Work (WeChat Enterprise) notification support
- Add wecom notification channel configuration to mock notification settings
- Initialize wecom with disabled state and empty webhook URL by default
- Update notification settings response to include wecom configuration
- Enable WeChat Work as an alternative notification channel alongside Discord
2026-01-10 10:29:33 +08:00
yyhuni
2ee9b5ffa2 Update version 2026-01-10 10:27:48 +08:00
yyhuni
648a1888d4 Add WeChat Work support 2026-01-10 10:16:01 +08:00
github-actions[bot]
2508268a45 chore: bump version to v1.5.4-dev 2026-01-10 02:10:05 +00:00
yyhuni
c60383940c Provide upgrade functionality 2026-01-10 10:04:07 +08:00
yyhuni
47298c294a Performance optimization 2026-01-10 09:44:49 +08:00
yyhuni
eba394e14e optimize: performance improvements 2026-01-10 09:44:43 +08:00
yyhuni
592a1958c4 Optimize UI 2026-01-09 16:52:50 +08:00
yyhuni
38e2856c08 feat(scan): add provider abstraction layer for flexible target sourcing
- Add TargetProvider base class and ProviderContext for unified target acquisition
- Implement DatabaseTargetProvider for database-backed target queries
- Implement ListTargetProvider for in-memory target lists (fast scan phase 1)
- Implement SnapshotTargetProvider for snapshot table reads (fast scan phase 2+)
- Implement PipelineTargetProvider for pipeline stage outputs
- Add comprehensive provider tests covering common properties and individual providers
- Update screenshot_flow to support both legacy mode (target_id) and provider mode
- Add backward compatibility layer for existing task exports (directory, fingerprint, port, site, url_fetch, vuln scans)
- Add task backward compatibility tests
- Update .gitignore to exclude .hypothesis/ cache directory
- Update frontend ANSI log viewer component
- Update backend requirements.txt with new dependencies
- Enables flexible data source integration while maintaining backward compatibility with existing database-driven workflows
2026-01-09 09:02:09 +08:00
yyhuni
f5ad8e68e9 chore(backend): add hypothesis cache directory to gitignore
- Add .hypothesis/ directory to .gitignore to exclude Hypothesis property testing cache files
- Prevents test cache artifacts from being tracked in version control
- Improves repository cleanliness by ignoring generated test data
2026-01-08 11:58:49 +08:00
yyhuni
d5f91a236c Merge branch 'main' of https://github.com/yyhuni/xingrin 2026-01-08 10:37:32 +08:00
yyhuni
24ae8b5aeb docs: restructure README features section with capability tables
- Convert feature descriptions from nested lists to organized capability tables
- Add scanning capability table with tools and descriptions for each feature
- Add platform capability table highlighting core platform features
- Improve readability and scannability of feature documentation
- Maintain scanning pipeline architecture section for reference
- Simplify feature organization for better user comprehension
2026-01-08 10:35:56 +08:00
github-actions[bot]
86f43f94a0 chore: bump version to v1.5.3 2026-01-08 02:17:58 +00:00
yyhuni
53ba03d1e5 支持kali 2026-01-08 10:14:12 +08:00
github-actions[bot]
89c44ebd05 chore: bump version to v1.5.2 2026-01-08 00:20:11 +00:00
yyhuni
e0e3419edb chore(docker): improve worker dockerfile reliability with retry mechanism
- Add retry mechanism for apt-get install to handle ARM64 mirror sync delays
- Use --no-install-recommends flag to reduce image size and installation time
- Split apt-get update and install commands for better layer caching
- Add fallback installation logic for packages in case of initial failure
- Include explanatory comment about ARM64 ports.ubuntu.com potential delays
- Maintain compatibility with both ARM64 and AMD64 architectures
2026-01-08 08:14:24 +08:00
yyhuni
52ee4684a7 chore(docker): add apt-get update before playwright dependencies
- Add apt-get update before installing playwright chromium dependencies
- Ensures package lists are refreshed before installing system dependencies
- Prevents potential package installation failures in Docker builds
2026-01-08 08:09:21 +08:00
yyhuni
ce8cebf11d chore(frontend): update pnpm-lock.yaml with @radix-ui/react-hover-card
- Add @radix-ui/react-hover-card@1.1.15 package resolution entry
- Add package snapshot with all required dependencies and peer dependencies
- Update lock file to reflect new hover card component dependency
- Ensures consistent dependency management across the frontend environment
2026-01-08 07:57:58 +08:00
yyhuni
ec006d8f54 chore(frontend): add @radix-ui/react-hover-card dependency
- Add @radix-ui/react-hover-card v1.1.6 to project dependencies
- Enables hover card UI component functionality for improved user interactions
- Maintains consistency with existing Radix UI component library usage
2026-01-08 07:56:07 +08:00
yyhuni
48976a570f docs: update README with screenshot feature and sponsorship info
- Add screenshot feature documentation to features section with Playwright details
- Include WebP format compression benefits and multi-source URL support
- Add screenshot stage to scan flow architecture diagram with styling
- Add fingerprint library table with counts for public distribution
- Add sponsorship section with WeChat Pay and Alipay QR codes
- Add sponsor appreciation table
- Update frontend dependencies with @radix-ui/react-visually-hidden package
- Remove redundant installation speed note from mirror parameter documentation
- Clean up demo link formatting in online demo section
2026-01-08 07:54:31 +08:00
yyhuni
5da7229873 feat(scan-overview): add yaml configuration tab and improve logs layout
- Add yaml_configuration field to ScanHistorySerializer for backend exposure
- Implement tabbed interface with Logs and Configuration tabs in scan overview
- Add YamlEditor component to display scan configuration in read-only mode
- Refactor logs section to show status bar only when logs tab is active
- Move auto-refresh toggle to logs tab header for better UX
- Add padding to stage progress items for improved visual alignment
- Add internationalization strings for new UI elements (en and zh)
- Update ScanHistory type to include yamlConfiguration field
- Improve tab switching state management with activeTab state
2026-01-08 07:31:54 +08:00
yyhuni
8bb737a9fa feat(scan-history): add auto-refresh toggle and improve layout
- Add auto-refresh toggle switch to scan logs section for manual control
- Implement flexible polling based on auto-refresh state and scan status
- Restructure scan overview layout to use left-right split (stages + logs)
- Move stage progress to left column with vulnerability statistics
- Implement scrollable logs panel on right side with proper height constraints
- Update component imports to use Switch and Label instead of Button
- Add full-height flex layout to parent containers for proper scrolling
- Refactor grid layout from 2-column to fixed-width left + flexible right
- Update translations for new UI elements and labels
- Improve responsive design with better flex constraints and min-height handling
2026-01-07 23:30:27 +08:00
yyhuni
2d018d33f3 Improve the scan history detail page 2026-01-07 22:44:46 +08:00
yyhuni
0c07cc8497 refactor(scan-flows): simplify logger calls by splitting multiline strings
- Split multiline logger.info() calls into separate single-line calls in initiate_scan_flow.py
- Improved log readability by removing string concatenation with newlines and separators
- Refactored 6 logger.info() calls across sequential, parallel, and completion stages
- Updated subdomain_discovery_flow.py to use consistent single-line logger pattern
- Maintains same log output while improving code maintainability and consistency
2026-01-07 22:21:50 +08:00
yyhuni
225b039985 style(system-logs): adjust log level filter dropdown width
- Increase SelectTrigger width from 100px to 130px for better label visibility
- Improve UI consistency in log toolbar component
- Prevent text truncation in log level filter dropdown
2026-01-07 22:17:07 +08:00
yyhuni
d1624627bc Add icons to top-level tabs 2026-01-07 22:14:42 +08:00
yyhuni
7bb15e4ae4 add: screenshot feature 2026-01-07 22:10:51 +08:00
github-actions[bot]
8e8cc29669 chore: bump version to v1.4.1 2026-01-07 01:33:29 +00:00
yyhuni
d6d5338acb Add asset deletion feature 2026-01-07 09:29:31 +08:00
yyhuni
c521bdb511 refactor: fallback logic 2026-01-07 08:45:27 +08:00
yyhuni
abf2d95f6f feat(targets): increase max batch size for target creation from 1000 to 5000
- Update MAX_BATCH_SIZE constant in BatchCreateTargetSerializer from 1000 to 5000
- Increase batch creation limit to support larger bulk operations
- Update documentation comment to reflect new limit
- Allows users to create up to 5000 targets in a single batch operation
2026-01-06 20:39:31 +08:00
github-actions[bot]
ab58cf0d85 chore: bump version to v1.4.0 2026-01-06 09:31:29 +00:00
yyhuni
fb0111adf2 Merge branch 'dev' 2026-01-06 17:27:35 +08:00
yyhuni
161ee9a2b1 Merge branch 'dev' 2026-01-06 17:27:16 +08:00
yyhuni
0cf75585d5 docs: add blacklist filtering documentation to README 2026-01-06 17:25:31 +08:00
yyhuni
1d8d5f51d9 feat(blacklist): add mock data and service integration for blacklist management
- Create new blacklist mock data module with global and target-specific patterns
- Add mock functions for getting and updating global blacklist rules
- Add mock functions for getting and updating target-specific blacklist rules
- Integrate mock blacklist endpoints into global-blacklist.service.ts
- Integrate mock blacklist endpoints into target.service.ts
- Export blacklist mock functions from main mock index
- Enable testing of blacklist management UI without backend API
2026-01-06 17:08:51 +08:00
github-actions[bot]
3f8de07c8c chore: bump version to v1.4.0-dev 2026-01-06 09:02:31 +00:00
yyhuni
cd5c2b9f11 chore(notifications): remove test notification endpoint
- Remove test notification route from URL patterns
- Delete notifications_test view function and associated logic
- Clean up unused test endpoint that was used for development purposes
- Simplify notification API surface by removing non-production code
2026-01-06 16:57:29 +08:00
yyhuni
54786c22dd feat(scan): increase max batch size for quick scan operations
- Increase MAX_BATCH_SIZE from 1000 to 5000 in QuickScanSerializer
- Allows processing of larger batch scans in a single operation
- Improves throughput for bulk scanning workflows
2026-01-06 16:55:28 +08:00
yyhuni
d468f975ab feat(scan): implement fallback chain for endpoint URL export
- Add fallback chain for URL data sources: Endpoint → WebSite → default generation
- Import WebSite model and Path utility for enhanced file handling
- Create output directory automatically if it doesn't exist
- Add "source" field to return value indicating data origin (endpoint/website/default)
- Update docstring to document the three-tier fallback priority system
- Implement sequential export attempts with logging at each fallback stage
- Improve error handling and data source transparency for endpoint exports
2026-01-06 16:30:42 +08:00
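The three-tier fallback this commit documents (Endpoint → WebSite → default generation), with a `source` tag in the return value for logging, can be sketched as follows; the function and parameter names are illustrative, not the actual task's signature:

```python
def export_urls(endpoint_urls, website_urls, default_url):
    """Pick URLs by fallback priority and report which tier supplied them.

    Returns (urls, source) where source is "endpoint", "website", or
    "default", mirroring the commit's data-origin field.
    """
    if endpoint_urls:
        return list(endpoint_urls), "endpoint"
    if website_urls:
        return list(website_urls), "website"
    # Last resort: generate a single default URL for the target
    return [default_url], "default"
```

Returning the source alongside the data is what makes the per-stage logging and debugging transparency described above possible.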
yyhuni
a85a12b8ad feat(asset): create incremental materialized views for asset search
- Add pg_ivm extension for incremental materialized view maintenance
- Create asset_search_view for Website model with optimized columns for full-text search
- Create endpoint_search_view for Endpoint model with matching search schema
- Add database indexes on host, url, title, status_code, and created_at columns for both views
- Enable high-performance asset search queries with automatic view maintenance
2026-01-06 16:22:24 +08:00
yyhuni
a8b0d97b7b feat(targets): update navigation routes and enhance add button UI
- Change target detail navigation route from `/website/` to `/overview/`
- Update TargetNameCell click handler to use new overview route
- Update TargetRowActions onView handler to use new overview route
- Add IconPlus icon import from @tabler/icons-react
- Add icon to create target button for improved visual clarity
- Improves navigation consistency and button affordance in targets table
2026-01-06 16:14:54 +08:00
yyhuni
b8504921c2 feat(fingerprints): add JSONL format support for Goby fingerprint imports
- Add support for JSONL format parsing in addition to standard JSON for Goby fingerprints
- Update GobyFingerprintService to validate both standard format (name/logic/rule) and JSONL format (product/rule)
- Implement _parse_json_content() method to handle both JSON and JSONL file formats with proper error handling
- Add JSONL parsing logic in frontend import dialog with per-line validation and error reporting
- Update file import endpoint documentation to indicate JSONL format support
- Improve error messages for encoding and parsing failures to aid user debugging
- Enable seamless import of Goby fingerprint data from multiple source formats
2026-01-06 16:10:14 +08:00
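The dual-format parsing described above (try the whole document as JSON first, then fall back to per-line JSONL with line-numbered errors) might look like this; `parse_fingerprint_content` is a hypothetical stand-in for the commit's `_parse_json_content()`:

```python
import json

def parse_fingerprint_content(text: str) -> list[dict]:
    """Accept a standard JSON document (object or array) or JSONL input."""
    try:
        data = json.loads(text)
        return data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        pass  # not a single JSON document; try line-by-line JSONL
    records = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line:
            continue
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError as exc:
            # Per-line error reporting, as the frontend import dialog does
            raise ValueError(f"line {lineno}: invalid JSON ({exc.msg})") from exc
    return records
```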
yyhuni
ecfc1822fb style(target): update vulnerability icon color to muted foreground
- Change ShieldAlert icon color from red-500 to muted-foreground in target overview
- Improves visual consistency with design system color palette
- Reduces visual emphasis on vulnerability section for better UI balance
2026-01-06 12:01:59 +08:00
github-actions[bot]
81633642e6 chore: bump version to v1.3.16-dev 2026-01-06 03:55:16 +00:00
yyhuni
d1ec9b7f27 feat(settings): add global blacklist management page and UI integration
- Add new global blacklist settings page with pattern management interface
- Create useGlobalBlacklist and useUpdateGlobalBlacklist React Query hooks for data fetching and mutations
- Implement global-blacklist.service.ts with API integration for blacklist operations
- Add Global Blacklist navigation item to app sidebar with Ban icon
- Add internationalization support for blacklist UI with English and Chinese translations
- Include pattern matching rules documentation (domain wildcards, keywords, IP addresses, CIDR ranges)
- Add loading states, error handling, and success/error toast notifications
- Implement textarea input with change tracking and save button state management
2026-01-06 11:50:31 +08:00
yyhuni
2a3d9b4446 feat(target): add initiate scan button and improve overview layout
- Add "Initiate Scan" button to target overview header with Play icon
- Implement InitiateScanDialog component integration for quick scan initiation
- Improve scheduled scans card layout with flexbox for better vertical spacing
- Reduce displayed scheduled scans from 3 to 2 items for better UI balance
- Enhance vulnerability statistics card styling with proper flex layout
- Add state management for scan dialog open/close functionality
- Update i18n translations (en.json, zh.json) with "initiateScan" label
- Refactor target info section to accommodate new action button with justify-between layout
- Improve empty state centering in scheduled scans card using flex layout
2026-01-06 11:10:47 +08:00
yyhuni
9b63203b5a refactor(migrations,frontend,backend): reorganize app structure and enhance target management UI
- Consolidate common migrations into dedicated common app module
- Remove asset search materialized view migration (0002) and simplify migration structure
- Reorganize target detail page with new overview and settings sub-routes
- Add target overview component displaying key asset information
- Add target settings component for configuration management
- Enhance scan history UI with improved data table and column definitions
- Update scheduled scan dialog with better form handling
- Refactor target service with improved API integration
- Update scan hooks (use-scans, use-scheduled-scans) with better state management
- Add internationalization strings for new target management features
- Update Docker initialization and startup scripts for new app structure
- Bump Django to 5.2.7 and update dependencies in requirements.txt
- Add WeChat group contact information to README
- Improve UI tabs component with better accessibility and styling
2026-01-06 10:42:38 +08:00
yyhuni
6ff86e14ec Update README.md 2026-01-06 09:59:55 +08:00
yyhuni
4c1282e9bb Complete the blacklist backend logic 2026-01-05 23:26:50 +08:00
yyhuni
ba3a9b709d feat(system-logs): enhance ANSI log viewer with log level colorization
- Add LOG_LEVEL_COLORS configuration mapping for DEBUG, INFO, WARNING, WARN, ERROR, and CRITICAL levels
- Implement hasAnsiCodes() function to detect presence of ANSI escape sequences in log content
- Add colorizeLogContent() function to parse plain text logs and apply color styling based on log levels
- Support dual-mode log parsing: ANSI color codes and plain text log level detection
- Rename converter to ansiConverter for clarity and consistency
- Change newline handling from true to false for manual line break control
- Apply color-coded styling to timestamps (gray), log levels (level-specific colors), and messages
- Add bold font-weight styling for CRITICAL level logs for better visibility
2026-01-05 16:27:31 +08:00
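The dual-mode detection above (ANSI escape sequences vs. plain-text level matching) is straightforward to sketch. The real component is TypeScript, so this Python version only mirrors the `LOG_LEVEL_COLORS` idea; names and color values are assumptions:

```python
import re

ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")
LOG_LEVEL_COLORS = {
    "DEBUG": "gray", "INFO": "green", "WARNING": "orange",
    "WARN": "orange", "ERROR": "red", "CRITICAL": "red",
}

def has_ansi_codes(text: str) -> bool:
    """Detect ANSI escape sequences; if present, defer to the ANSI converter."""
    return bool(ANSI_RE.search(text))

def colorize_line(line: str):
    """Map a plain-text log line to (level, color), or (None, None) if no level found."""
    match = re.search(r"\b(DEBUG|INFO|WARNING|WARN|ERROR|CRITICAL)\b", line)
    if not match:
        return None, None
    level = match.group(1)
    return level, LOG_LEVEL_COLORS[level]
```

The viewer picks a mode per log: ANSI content goes through the converter, plain text goes through the level-based colorizer.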
github-actions[bot]
283b28b46a chore: bump version to v1.3.15-dev 2026-01-05 02:05:29 +00:00
yyhuni
1269e5a314 refactor(scan): reorganize models and serializers into modular structure
- Split monolithic models.py into separate model files (scan_models.py, scan_log_model.py, scheduled_scan_model.py, subfinder_provider_settings_model.py)
- Split monolithic serializers.py into separate serializer files with dedicated modules for each domain
- Add SubfinderProviderSettings model to store API key configurations for subfinder data sources
- Create SubfinderProviderConfigService to generate provider configuration files dynamically
- Add subfinder_provider_settings views and serializers for API key management
- Update subdomain_discovery_flow to support provider configuration file generation and passing to subfinder
- Update command templates to use provider config file and remove recursive flag for better source coverage
- Add frontend settings page for managing API keys at /settings/api-keys
- Add frontend hooks and services for API key settings management
- Update sidebar navigation to include API keys settings link
- Add internationalization support for new API keys settings UI (English and Chinese)
- Improves code maintainability by organizing related models and serializers into logical modules
2026-01-05 10:00:19 +08:00
yyhuni
802e967906 docs: add online demo link to README
- Add new "🌐 在线 Demo" section with live demo URL
- Include disclaimer note that demo is UI-only without backend database
- Improve documentation to help users quickly access and test the application
2026-01-04 19:19:33 +08:00
github-actions[bot]
e446326416 chore: bump version to v1.3.14 2026-01-04 11:02:14 +00:00
yyhuni
e0abb3ce7b Merge branch 'dev' 2026-01-04 18:57:49 +08:00
yyhuni
d418baaf79 feat(mock,scan): add comprehensive mock data and improve system load management
- Add mock data files for directories, fingerprints, IP addresses, notification settings, nuclei templates, search, system logs, tools, and wordlists
- Update mock index to export new mock data modules
- Increase SCAN_LOAD_CHECK_INTERVAL from 30 to 180 seconds for better system stability
- Improve load check logging message to clarify OOM prevention strategy
- Enhance mock data infrastructure to support frontend development and testing
2026-01-04 18:52:08 +08:00
github-actions[bot]
f8da408580 chore: bump version to v1.3.13-dev 2026-01-04 10:24:10 +00:00
yyhuni
7cd4354d8f feat(scan,asset): add scan logging system and improve search view architecture
- Add user_logger utility for structured scan operation logging
- Create scan log views and API endpoints for retrieving scan execution logs
- Add scan-log-list component and use-scan-logs hook for frontend log display
- Refactor asset search views to remove ArrayField support from pg_ivm IMMV
- Update search_service.py to JOIN original tables for array field retrieval
- Add system architecture requirements (AMD64/ARM64) to README
- Update scan flow handlers to integrate logging system
- Enhance scan progress dialog with log viewer integration
- Add ANSI log viewer component for formatted log display
- Update scan service API to support log retrieval endpoints
- Migrate database schema to support new logging infrastructure
- Add internationalization strings for scan logs (en/zh)
This change improves observability of scan operations and resolves pg_ivm limitations with ArrayField types by fetching array data from original tables via JOIN operations.
2026-01-04 18:19:45 +08:00
yyhuni
6bf35a760f chore(docker): configure Prefect home directory in worker image
- Add PREFECT_HOME environment variable pointing to /app/.prefect
- Create Prefect configuration directory to prevent home directory warnings
- Update step numbering in Dockerfile comments for clarity
- Ensures Prefect can properly initialize configuration without relying on user home directory
2026-01-04 10:39:11 +08:00
github-actions[bot]
be9ecadffb chore: bump version to v1.3.12-dev 2026-01-04 01:05:00 +00:00
yyhuni
adb53c9f85 feat(asset,scan): add configurable statement timeout and improve CSV export
- Add statement_timeout_ms parameter to search_service count() and stream_search() methods for long-running exports
- Replace server-side cursors with OFFSET/LIMIT batching for better Django compatibility
- Introduce create_csv_export_response() utility function to standardize CSV export handling
- Add engine-preset-selector and scan-config-editor components for enhanced scan configuration UI
- Update YAML editor component with improved styling and functionality
- Add i18n translations for new scan configuration features in English and Chinese
- Refactor CSV export endpoints to use new utility function instead of manual StreamingHttpResponse
- Remove unused uuid import from search_service.py
- Update nginx configuration for improved performance
- Enhance search service with configurable timeout support for large dataset exports
2026-01-04 08:58:31 +08:00
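The OFFSET/LIMIT batching mentioned above can be sketched as a generator; this is an assumption-laden simplification (the function name and batch size are illustrative), but anything with slice semantics — a Django QuerySet or a plain list — works:

```python
def iter_in_batches(queryset, batch_size=1000):
    """Yield rows in OFFSET/LIMIT batches instead of a server-side cursor.

    Each slice translates to OFFSET/LIMIT for a QuerySet, which avoids
    holding a transaction-bound cursor open across the whole export."""
    offset = 0
    while True:
        batch = list(queryset[offset:offset + batch_size])
        if not batch:
            break
        yield from batch
        offset += batch_size
```

For stable results the underlying query should have a deterministic ordering, otherwise pages can overlap between batches.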
yyhuni
7b7bbed634 Update README.md 2026-01-03 22:15:35 +08:00
github-actions[bot]
8dd3f0536e chore: bump version to v1.3.11-dev 2026-01-03 11:54:31 +00:00
yyhuni
8a8062a12d refactor(scan): rename merged_configuration to yaml_configuration
- Rename `merged_configuration` field to `yaml_configuration` in Scan and ScheduledScan models for clarity
- Update all references across scan repositories, services, views, and serializers
- Update database migration to reflect field name change with improved help text
- Update frontend components to use new field naming convention
- Add YAML editor component for improved configuration editing in UI
- Update engine configuration retrieval in initiate_scan_flow to use new field name
- Remove unused asset tasks __init__.py module
- Simplify README feedback section for better clarity
- Update frontend type definitions and internationalization messages for consistency
2026-01-03 19:50:20 +08:00
yyhuni
55908a2da5 fix(asset,scan): improve decorator usage and dialog layout
- Fix transaction.non_atomic_requests decorator usage in AssetSearchExportView by wrapping with method_decorator for proper class-based view compatibility
- Update scan progress dialog to use flexible width (sm:max-w-fit sm:min-w-[450px]) instead of fixed width for better responsiveness
- Refactor engine names display from single Badge to grid layout with multiple badges for improved readability when multiple engines are present
- Add proper spacing and alignment adjustments (gap-4, items-start) to accommodate multi-line engine badge display
- Add text-xs and whitespace-nowrap to engine badges for consistent styling in grid layout
2026-01-03 18:46:44 +08:00
github-actions[bot]
22a7d4f091 chore: bump version to v1.3.10-dev 2026-01-03 10:45:32 +00:00
yyhuni
f287f18134 Update pinned images 2026-01-03 18:33:25 +08:00
yyhuni
de27230b7a Update build CI 2026-01-03 18:28:57 +08:00
github-actions[bot]
15a6295189 chore: bump version to v1.3.8-dev 2026-01-03 10:24:17 +00:00
yyhuni
674acdac66 refactor(asset): move database extension initialization to migrations
- Remove pg_trgm and pg_ivm extension setup from AssetConfig.ready() method
- Move extension creation to migration 0002 using RunSQL operations
- Add pg_trgm extension creation for text search index support
- Add pg_ivm extension creation for IMMV incremental maintenance
- Generate unique cursor names in search_service to prevent concurrent request conflicts
- Add @transaction.non_atomic_requests decorator to export view for server-side cursor compatibility
- Simplify app initialization by delegating extension setup to database migrations
- Improve thread safety and concurrency handling for streaming exports
2026-01-03 18:20:27 +08:00
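Moving extension setup into a migration likely looks something like the following RunSQL sketch (app label, migration dependency, and reverse SQL are assumptions for illustration; the database role must be allowed to CREATE EXTENSION):

```python
from django.db import migrations


class Migration(migrations.Migration):
    # Hypothetical dependency; the commit refers to migration 0002
    dependencies = [("asset", "0001_initial")]

    operations = [
        # pg_trgm backs the trigram GIN indexes used for fuzzy text search
        migrations.RunSQL(
            sql="CREATE EXTENSION IF NOT EXISTS pg_trgm;",
            reverse_sql="DROP EXTENSION IF EXISTS pg_trgm;",
        ),
        # pg_ivm provides incrementally maintained materialized views (IMMV)
        migrations.RunSQL(
            sql="CREATE EXTENSION IF NOT EXISTS pg_ivm;",
            reverse_sql="DROP EXTENSION IF EXISTS pg_ivm;",
        ),
    ]
```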
github-actions[bot]
c59152bedf chore: bump version to v1.3.7-dev 2026-01-03 09:56:39 +00:00
yyhuni
b4037202dc feat: use registry cache for faster builds 2026-01-03 17:35:54 +08:00
yyhuni
4b4f9862bf ci(docker): add postgres image build configuration and update image tags
- Add xingrin-postgres image build job to docker-build workflow for multi-platform support (linux/amd64,linux/arm64)
- Update docker-compose.dev.yml to use IMAGE_TAG variable with dev as default fallback
- Update docker-compose.yml to use IMAGE_TAG variable with required validation
- Replace hardcoded postgres image tag (15) with dynamic IMAGE_TAG for better version management
- Enable flexible image tagging across development and production environments
2026-01-03 17:26:34 +08:00
github-actions[bot]
1c42e4978f chore: bump version to v1.3.5-dev 2026-01-03 08:44:06 +00:00
github-actions[bot]
57bab63997 chore: bump version to v1.3.3-dev 2026-01-03 05:55:07 +00:00
github-actions[bot]
b1f0f18ac0 chore: bump version to v1.3.4-dev 2026-01-03 05:54:50 +00:00
yyhuni
ccee5471b8 docs(readme): add notification push service documentation
- Add notification push service feature to visualization interface section
- Document support for real-time WeChat Work, Telegram, and Discord message push
- Enhance feature list clarity for notification capabilities
2026-01-03 13:34:36 +08:00
yyhuni
0ccd362535 Optimize download logic 2026-01-03 13:32:58 +08:00
yyhuni
7f2af7f7e2 feat(search): add result export functionality and pagination limit support
- Add optional limit parameter to AssetSearchService.search() method for controlling result set size
- Implement AssetSearchExportView for exporting search results as CSV files with UTF-8 BOM encoding
- Add CSV export endpoint at GET /api/assets/search/export/ with configurable MAX_EXPORT_ROWS limit (10000)
- Support both website and endpoint asset types with type-specific column mappings in CSV export
- Format array fields (tech, matched_gf_patterns) and dates appropriately in exported CSV
- Update URL routing to include new search export endpoint
- Update views __init__.py to export AssetSearchExportView
- Add CSV generation with streaming response for efficient memory usage on large exports
- Update frontend search service to support export functionality
- Add internationalization strings for export feature in en.json and zh.json
- Update smart-filter-input and search-results-table components to support export UI
- Update installation and Docker startup scripts for deployment compatibility
2026-01-03 13:22:21 +08:00
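The UTF-8 BOM plus streaming approach above can be sketched as a plain generator (the project wraps this in a StreamingHttpResponse; the function shape here is an assumption). The BOM makes Excel detect UTF-8 correctly:

```python
import csv
import io


def stream_csv(rows, header):
    """Yield a UTF-8 BOM first, then one CSV chunk per row.

    Streaming row by row keeps memory flat on large exports; the caller
    encodes chunks to bytes at the HTTP-response layer."""
    yield "\ufeff"  # UTF-8 byte-order mark, emitted once at the start
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in [header, *rows]:
        writer.writerow(row)
        yield buf.getvalue()
        buf.seek(0)
        buf.truncate(0)
```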
yyhuni
4bd0f9e8c1 feat(search): implement dual-view IMMV architecture for website and endpoint assets
- Add incremental materialized view (IMMV) support for both Website and Endpoint asset types using pg_ivm extension
- Create asset_search_view IMMV with optimized indexes for host, title, url, headers, body, tech, status_code, and created_at fields
- Create endpoint_search_view IMMV with identical field structure and indexing strategy for endpoint-specific searches
- Extend search_service.py to support asset type routing with VIEW_MAPPING and VALID_ASSET_TYPES configuration
- Add comprehensive field mapping and array field definitions for both asset types
- Implement dual-query execution path in search views to handle website and endpoint searches independently
- Update frontend search components to support asset type filtering and result display
- Add search results table component with improved data presentation and filtering capabilities
- Update installation scripts and Docker configuration for pg_ivm extension deployment
- Add internationalization strings for new search UI elements in English and Chinese
- Consolidate index creation and cleanup logic in migrations for maintainability
- Enable automatic incremental updates on data changes without manual view refresh
2026-01-03 12:41:20 +08:00
yyhuni
68cc996e3b refactor(asset): standardize snapshot and asset model field naming and types
- Rename `status` to `status_code` in WebsiteSnapshotDTO for consistency
- Rename `web_server` to `webserver` in WebsiteSnapshotDTO for consistency
- Make `target_id` required field in EndpointSnapshotDTO and WebsiteSnapshotDTO
- Remove optional validation check for `target_id` in EndpointSnapshotDTO
- Convert CharField to TextField for url, location, title, webserver, and content_type fields in Endpoint and EndpointSnapshot models to support longer values
- Update migration 0001_initial.py to reflect field type changes from CharField to TextField
- Update all related services and repositories to use standardized field names
- Update serializers to map renamed fields correctly
- Ensure consistent field naming across DTOs, models, and database schema
2026-01-03 09:08:25 +08:00
github-actions[bot]
f1e79d638e chore: bump version to v1.3.2-dev 2026-01-03 00:33:26 +00:00
yyhuni
d484133e4c chore(docker): optimize server dockerfile with docker-ce-cli installation
- Replace full docker.io package with lightweight docker-ce-cli to reduce image size
- Add ca-certificates and gnupg dependencies for secure package management
- Improve Docker installation process for local Worker task distribution
- Reduce unnecessary dependencies in server container build
2026-01-03 08:09:03 +08:00
yyhuni
fc977ae029 chore(docker,frontend): optimize docker installation and add auth bypass config
- Replace docker.io installation script with apt-get package manager for better reliability
- Add NEXT_PUBLIC_SKIP_AUTH environment variable to Vercel config for development
- Improve Docker build layer caching by using native package manager instead of curl script
- Simplify frontend deployment configuration for local development workflows
2026-01-03 08:08:40 +08:00
yyhuni
f328474404 feat(frontend): add comprehensive mock data infrastructure for services
- Add mock data modules for auth, engines, notifications, scheduled-scans, and workers
- Implement mock authentication data with user profiles and login/logout responses
- Create mock scan engine configurations with multiple predefined scanning profiles
- Add mock notification system with various severity levels and categories
- Implement mock scheduled scan data with cron expressions and run history
- Add mock worker node data with status and performance metrics
- Update service layer to integrate with new mock data infrastructure
- Provide helper functions for filtering and paginating mock data
- Enable frontend development and testing without backend API dependency
2026-01-03 07:59:20 +08:00
yyhuni
68e726a066 chore(docker): update base image to python 3.10-slim-bookworm
- Update Python base image from 3.10-slim to 3.10-slim-bookworm
- Ensures compatibility with latest Debian stable release
- Improves security with updated system packages and dependencies
2026-01-02 23:19:09 +08:00
yyhuni
77a6f45909 fix: search statistics issue 2026-01-02 23:12:55 +08:00
yyhuni
49d1f1f1bb Adopt the IVM incremental-update approach for search 2026-01-02 22:46:40 +08:00
yyhuni
db8ecb1644 feat(search): add mock data infrastructure and vulnerability detail integration
- Add comprehensive mock data configuration for all major entities (dashboard, endpoints, organizations, scans, subdomains, targets, vulnerabilities, websites)
- Implement mock service layer with centralized config for development and testing
- Add vulnerability detail dialog integration to search results with lazy loading
- Enhance search result card with vulnerability viewing capability
- Update search materialized view migration to include vulnerability name field
- Implement default host fuzzy search fallback for bare text queries without operators
- Add vulnerability data formatting in search view for consistent API response structure
- Configure Vercel deployment settings and update Next.js configuration
- Update all service layers to support mock data injection for development environment
- Extend search types with improved vulnerability data structure
- Add internationalization strings for vulnerability loading errors
- Enable rapid frontend development and testing without backend API dependency
2026-01-02 19:06:09 +08:00
yyhuni
18cc016268 feat(search): implement advanced query parser with expression syntax support
- Add SearchQueryParser class to parse complex search expressions with operators (=, ==, !=)
- Support logical operators && (AND) and || (OR) for combining multiple conditions
- Implement field mapping for frontend to database field translation
- Add support for array field searching (tech stack) with unnest and ANY operators
- Support fuzzy matching (=), exact matching (==), and negation (!=) operators
- Add proper SQL injection prevention through parameterized queries
- Refactor search service to use expression-based filtering instead of simple filters
- Update search views to integrate new query parser
- Enhance frontend search hook and service to support new expression syntax
- Update search types to reflect new query structure
- Improve search page UI to display expression syntax examples and help text
- Enable complex multi-condition searches like: host="api" && tech="nginx" || status=="200"
2026-01-02 17:46:31 +08:00
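The expression syntax above can be illustrated with a stripped-down parser that emits parameterized SQL (the real SearchQueryParser is richer; `FIELD_MAP` and the precedence-free left-to-right handling here are simplifying assumptions). Values never enter the SQL string, which is the injection-prevention point:

```python
import re

# Hypothetical frontend-name -> DB-column mapping
FIELD_MAP = {"host": "host", "status": "status_code", "title": "title"}

TERM_RE = re.compile(r'(\w+)\s*(==|!=|=)\s*"([^"]*)"')


def parse_expression(expr):
    """Turn e.g. host="api" && status=="200" into (sql, params)."""
    sql_parts, params = [], []
    for tok in re.split(r'\s*(&&|\|\|)\s*', expr):
        if tok == "&&":
            sql_parts.append("AND")
        elif tok == "||":
            sql_parts.append("OR")
        else:
            field, op, value = TERM_RE.fullmatch(tok.strip()).groups()
            col = FIELD_MAP[field]
            if op == "=":        # fuzzy match
                sql_parts.append(f"{col} ILIKE %s")
                params.append(f"%{value}%")
            elif op == "==":     # exact match
                sql_parts.append(f"{col} = %s")
                params.append(value)
            else:                # negation
                sql_parts.append(f"{col} != %s")
                params.append(value)
    return " ".join(sql_parts), params
```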
yyhuni
23bc463283 feat(search): improve technology stack filtering with fuzzy matching
- Replace exact array matching with fuzzy search using ILIKE operator
- Update tech filter to search within array elements using unnest() and EXISTS
- Support partial technology name matching (e.g., "node" matches "nodejs")
- Apply consistent fuzzy matching logic across both search methods
- Enhance user experience by allowing flexible technology stack queries
2026-01-02 17:01:24 +08:00
yyhuni
7b903b91b2 feat(search): implement comprehensive search infrastructure with materialized views and pagination
- Add asset search service with materialized view support for optimized queries
- Implement search refresh service for maintaining up-to-date search indexes
- Create database migrations for AssetStatistics, StatisticsHistory, Directory, and DirectorySnapshot models
- Add PostgreSQL GIN indexes with trigram operators for full-text search capabilities
- Implement search pagination component with configurable page size and navigation
- Add search result card component with enhanced asset display formatting
- Create search API views with filtering and sorting capabilities
- Add use-search hook for client-side search state management
- Implement search service client for API communication
- Update search types with pagination metadata and result structures
- Add English and Chinese translations for search UI components
- Enhance scheduler to support search index refresh tasks
- Refactor asset views into modular search_views and asset_views
- Update URL routing to support new search endpoints
- Improve scan flow handlers for better search index integration
2026-01-02 16:57:54 +08:00
yyhuni
b3136d51b9 Complete frontend UI design for the search page 2026-01-02 10:07:26 +08:00
github-actions[bot]
08372588a4 chore: bump version to v1.2.15 2026-01-01 15:44:15 +00:00
yyhuni
236c828041 chore(fingerprints): remove deprecated ARL fingerprint rules
- Remove obsolete fingerprint detection rules from ARL.yaml
- Clean up legacy device and service signatures that are no longer maintained
- Reduce fingerprint database size by eliminating unused detection patterns
- Improve maintainability by removing outdated vendor-specific rules
2026-01-01 22:45:08 +08:00
yyhuni
fb13bb74d8 feat(filter): add array fuzzy search support with PostgreSQL array_to_string
- Add ArrayToString custom PostgreSQL function for converting arrays to delimited strings
- Implement array field annotation in QueryBuilder to support fuzzy matching on JSON array fields
- Enhance _build_single_q to handle three operators for JSON arrays: exact match (==), negation (!=), and fuzzy search (=)
- Update target navigation routes from subdomain to website view for consistency
- Enable fuzzy search on array fields by converting them to text during query building
2026-01-01 22:41:57 +08:00
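The three array operators above can be sketched as a fragment builder: flattening the array with PostgreSQL's `array_to_string` lets `=` fuzzy-match inside elements, while `==`/`!=` use `ANY` membership (exact SQL shape is an assumption based on the commit text):

```python
def array_condition(column, op, value):
    """Build a (sql, params) fragment for a JSON/array field.

    Fuzzy '=' converts the array to a delimited string so ILIKE can match
    substrings of elements (e.g. "node" matches {"nodejs"})."""
    if op == "=":
        return (f"array_to_string({column}, ',') ILIKE %s", [f"%{value}%"])
    if op == "==":
        return (f"%s = ANY({column})", [value])
    if op == "!=":
        return (f"NOT (%s = ANY({column}))", [value])
    raise ValueError(f"unsupported operator: {op}")
```

In Django this flattening is what the custom `ArrayToString` Func enables as a queryset annotation.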
yyhuni
f076c682b6 feat(scan): add multi-engine support and config merging with enhanced indexing
- Add multi-engine support to Scan model with engine_ids and engine_names fields
- Implement config_merger utility for merging multiple engine configurations
- Add merged_configuration property to Scan model for unified config access
- Update scan creation and scheduling services to handle multiple engines
- Add pg_trgm GIN indexes to asset and snapshot models for fuzzy search on url, title, and name fields
- Update scan views and serializers to support multi-engine selection and display
- Enhance frontend components for multi-engine scan initiation and scheduling
- Update test data generation script for multi-engine scan scenarios
- Add internationalization strings for multi-engine UI elements
- Refactor scan flow to use merged configuration instead of single engine config
- Update Docker compose files with latest configuration
2026-01-01 22:35:05 +08:00
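A possible shape for the config_merger utility mentioned above (this is a sketch under assumptions — the real merger may resolve scalar conflicts differently): nested dicts merge recursively, lists union with order-preserving dedup, and scalars take the later engine's value.

```python
def merge_engine_configs(configs):
    """Deep-merge a sequence of engine config dicts into one."""
    merged = {}
    for cfg in configs:
        for key, val in cfg.items():
            cur = merged.get(key)
            if isinstance(cur, dict) and isinstance(val, dict):
                merged[key] = merge_engine_configs([cur, val])
            elif isinstance(cur, list) and isinstance(val, list):
                # union, preserving order and dropping duplicates
                merged[key] = cur + [x for x in val if x not in cur]
            else:
                merged[key] = val
    return merged
```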
yyhuni
9eda2caceb feat(asset): add response headers and body tracking with pg_trgm indexing
- Rename body_preview to response_body across endpoint and website models for consistency
- Change response_headers from Dict to string type for efficient text indexing
- Add pg_trgm PostgreSQL extension initialization in AssetConfig for GIN index support
- Update all DTOs to reflect response_body and response_headers field changes
- Modify repositories to handle new response_body and response_headers formats
- Update serializers and views to work with string-based response headers
- Add response_headers and response_body columns to frontend endpoint and website tables
- Update command templates and scan tasks to populate response_body and response_headers
- Add database initialization script for pg_trgm extension in PostgreSQL setup
- Update frontend types and translations for new field names
- Enable efficient full-text search on response headers and body content through GIN indexes
2026-01-01 19:34:11 +08:00
yyhuni
b1c9e202dd feat(sidebar): add feedback link to secondary navigation menu
- Import IconMessageReport icon from tabler/icons-react for feedback menu item
- Add feedback navigation item linking to GitHub issues page
- Add "feedback" translation key to English messages (en.json)
- Add "feedback" translation key to Chinese messages (zh.json) as "反馈建议"
- Improves user engagement by providing direct access to issue reporting
2026-01-01 18:31:34 +08:00
yyhuni
918669bc29 style(ui): update expandable cell whitespace handling for better formatting
- Change whitespace class from `whitespace-normal` to `whitespace-pre-wrap` in expandable cell component
- Improves text rendering by preserving whitespace and line breaks in cell content
- Ensures consistent formatting display across different content types (mono, url, muted variants)
2026-01-01 16:41:47 +08:00
yyhuni
fd70b0544d docs(frontend): update Chinese translations to English for consistency
- Change "响应头" to "Response Headers" in endpoint messages
- Change "响应头" to "Response Headers" in website messages
- Maintain consistency across frontend message translations
- Improve clarity for international users by standardizing field labels
2026-01-01 16:23:03 +08:00
github-actions[bot]
0f2df7a5f3 chore: bump version to v1.2.14-dev 2026-01-01 05:13:25 +00:00
yyhuni
857ab737b5 feat(fingerprint): enhance xingfinger task with snapshot tracking and field merging
- Replace `not_found_count` with `created_count` and `snapshot_count` metrics in fingerprint detect flow
- Initialize and aggregate `snapshot_count` across tool statistics
- Refactor `parse_xingfinger_line()` to return structured dict with url, techs, server, title, status_code, and content_length
- Replace `bulk_merge_tech_field()` with `bulk_merge_website_fields()` to support merging multiple WebSite fields
- Implement smart merge strategy: arrays deduplicated, scalar fields only updated when empty/NULL
- Remove dynamic model loading via importlib in favor of direct WebSite model import
- Add WebsiteSnapshotDTO and DjangoWebsiteSnapshotRepository imports for snapshot handling
- Improve xingfinger output parsing to capture server, title, and HTTP metadata alongside technology detection
2026-01-01 12:40:49 +08:00
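The smart merge strategy above — arrays deduplicated, scalars only updated when empty/NULL — can be sketched like this (field semantics follow the commit text; the function name and empty-value convention are illustrative):

```python
def merge_website_fields(existing, incoming):
    """Merge freshly scanned fields into an existing website record.

    Array fields (e.g. tech) are unioned with dedup; scalar fields
    (e.g. title, webserver) are only filled when currently empty."""
    merged = dict(existing)
    for key, new_val in incoming.items():
        old_val = merged.get(key)
        if isinstance(new_val, list):
            old_list = old_val or []
            merged[key] = old_list + [x for x in new_val if x not in old_list]
        elif old_val in (None, ""):
            merged[key] = new_val
        # non-empty scalars keep their existing value
    return merged
```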
yyhuni
ee2d99edda feat(asset): add response headers tracking to endpoints and websites
- Add response_headers field to Endpoint and WebSite models as JSONField
- Add response_headers field to EndpointSnapshot and WebsiteSnapshot models
- Update all related DTOs to include response_headers with Dict[str, Any] type
- Add GIN indexes on response_headers fields for optimized JSON queries
- Update endpoint and website repositories to handle response_headers data
- Update serializers to include response_headers in API responses
- Update frontend components to display response headers in detail views
- Add response_headers to fingerprint detection and site scan tasks
- Update command templates and engine config to support header extraction
- Add internationalization strings for response headers in en.json and zh.json
- Update TypeScript types for endpoint and website to include response_headers
- Enhance scan history and target detail pages to show response header information
2026-01-01 12:25:22 +08:00
github-actions[bot]
db6ce16aca chore: bump version to v1.2.13-dev 2026-01-01 02:24:08 +00:00
yyhuni
ab800eca06 feat(frontend): reorder navigation tabs for improved UX
- Move "Websites" tab to first position in scan history and target layouts
- Reposition "IP Addresses" tab before "Ports" for better logical flow
- Maintain consistent tab ordering across both scan history and target pages
- Improve navigation hierarchy by placing primary discovery results first
2026-01-01 09:47:30 +08:00
yyhuni
e8e5572339 perf(asset): add GIN indexes for tech array fields and improve query parser
- Add GinIndex for tech array field in Endpoint model to optimize __contains queries
- Add GinIndex for tech array field in WebSite model to optimize __contains queries
- Import GinIndex from django.contrib.postgres.indexes
- Refactor QueryParser to protect quoted filter values during tokenization
- Implement placeholder-based filter extraction to preserve spaces within quoted values
- Replace filter tokens with placeholders before logical operator normalization
- Restore original filter conditions from placeholders during parsing
- Fix spacing in comments for consistency (add space after "从")
- Improves query performance for technology stack filtering on large datasets
2026-01-01 08:58:03 +08:00
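The placeholder-based protection of quoted filter values described above can be sketched as a stash/restore pair (placeholder format and regex are assumptions; the point is that spaces inside quotes survive operator normalization):

```python
import re

# A quoted filter term: field, operator, double-quoted value
QUOTED = re.compile(r'\w+\s*(?:==|!=|=)\s*"[^"]*"')


def protect_quoted_filters(query):
    """Swap each quoted filter for an opaque placeholder before tokenizing."""
    stash = []

    def repl(match):
        stash.append(match.group(0))
        return f"\x00{len(stash) - 1}\x00"

    return QUOTED.sub(repl, query), stash


def restore_filters(query, stash):
    """Put the original filter conditions back after normalization."""
    for i, original in enumerate(stash):
        query = query.replace(f"\x00{i}\x00", original)
    return query
```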
github-actions[bot]
d48d4bbcad chore: bump version to v1.2.12-dev 2025-12-31 16:01:48 +00:00
yyhuni
d1cca4c083 Set base timeout to 10s 2025-12-31 23:27:02 +08:00
yyhuni
df0810c863 feat: add fingerprint recognition feature and update documentation
- Add fingerprint recognition section to README with support for 27,000+ rules from multiple sources (EHole, Goby, Wappalyzer, Fingers, FingerPrintHub, ARL)
- Update scanning pipeline architecture diagram to include fingerprint recognition stage between site identification and deep analysis
- Add fingerprint recognition styling to mermaid diagram for visual consistency
- Include WORKER_API_KEY environment variable in task distributor for worker authentication
- Update WeChat QR code image and public account name from "洋洋的小黑屋" to "塔罗安全学苑"
- Fix import statements in nav-system.tsx to use i18n navigation utilities instead of next/link and next/navigation
- Enhance scanning workflow documentation to reflect complete pipeline: subdomain discovery → port scanning → site identification → fingerprint recognition → URL collection → directory scanning → vulnerability scanning
2025-12-31 23:09:25 +08:00
yyhuni
d33e54c440 docs: simplify quick-start guide
- Remove alternative ZIP download method, keep only Git clone approach
- Remove update.sh script reference from service management section
- Remove dedicated "定期更新" (periodic updates) section
- Streamline documentation to focus on primary installation and usage paths
2025-12-31 22:50:08 +08:00
yyhuni
35a306fe8b fix: dev environment 2025-12-31 22:46:42 +08:00
yyhuni
724df82931 chore: pin Docker base image digests and add worker API key generation
- Pin golang:1.24 base image to specific digest to prevent upstream cache invalidation
- Pin ubuntu:24.04 base image to specific digest to prevent upstream cache invalidation
- Add WORKER_API_KEY generation in install.sh auto_fill_docker_env_secrets function
- Generate random 32-character string for WORKER_API_KEY during installation
- Update installation info message to include WORKER_API_KEY in generated secrets list
- Improve build reproducibility and security by using immutable image references
2025-12-31 22:40:38 +08:00
yyhuni
8dfffdf802 fix: authentication 2025-12-31 22:21:40 +08:00
github-actions[bot]
b8cb85ce0b chore: bump version to v1.2.9-dev 2025-12-31 13:48:44 +00:00
yyhuni
da96d437a4 Add authorization and authentication 2025-12-31 20:18:34 +08:00
github-actions[bot]
feaf8062e5 chore: bump version to v1.2.8-dev 2025-12-31 11:33:14 +00:00
yyhuni
4bab76f233 fix: organization deletion issue 2025-12-31 17:50:37 +08:00
yyhuni
09416b4615 fix: Redis port 2025-12-31 17:45:25 +08:00
github-actions[bot]
bc1c5f6b0e chore: bump version to v1.2.7-dev 2025-12-31 06:16:42 +00:00
github-actions[bot]
2f2742e6fe chore: bump version to v1.2.6-dev 2025-12-31 05:29:36 +00:00
yyhuni
be3c346a74 Add search fields 2025-12-31 12:40:21 +08:00
yyhuni
0c7a6fff12 Add search support for the tech field 2025-12-31 12:37:02 +08:00
yyhuni
3b4f0e3147 fix: fingerprint recognition 2025-12-31 12:30:31 +08:00
yyhuni
51212a2a0c fix: fingerprint recognition 2025-12-31 12:17:23 +08:00
yyhuni
58533bbaf6 fix: Docker API 2025-12-31 12:03:08 +08:00
github-actions[bot]
6ccca1602d chore: bump version to v1.2.5-dev 2025-12-31 03:48:32 +00:00
yyhuni
6389b0f672 feat(fingerprints): Add type annotation to getAcceptConfig function
- Add explicit return type annotation `Record<string, string[]>` to getAcceptConfig function
- Improve type safety and IDE autocomplete for file type configuration
- Enhance code clarity for accepted file types mapping in import dialog
2025-12-31 10:17:25 +08:00
yyhuni
d7599b8599 feat(fingerprints): Add database indexes and expand test data generation
- Add database indexes on 'link' field in FingersFingerprint model for improved query performance
- Add database index on 'author' field in FingerPrintHubFingerprint model for filtering optimization
- Expand test data generation to include Fingers, FingerPrintHub, and ARL fingerprint types
- Add comprehensive fingerprint data generation methods with realistic templates and patterns
- Update test data cleanup to include all fingerprint table types
- Add i18n translations for fingerprint-related UI components and labels
- Optimize route prefetching hook for better performance
- Improve fingerprint data table columns and vulnerability columns display consistency
2025-12-31 10:04:15 +08:00
yyhuni
8eff298293 Update image mirror acceleration logic 2025-12-31 08:56:55 +08:00
yyhuni
3634101c5b Add fingerprints from ARL (灯塔) and other sources 2025-12-31 08:55:37 +08:00
yyhuni
163973a7df feat(i18n): Add internationalization support to dropzone component
- Add useTranslations hook to DropzoneContent component for multi-language support
- Add useTranslations hook to DropzoneEmptyState component for multi-language support
- Replace hardcoded English strings with i18n translation keys in dropzone UI
- Add comprehensive translation keys for dropzone messages in en.json:
* uploadFile, uploadFiles, dragOrClick, dragOrClickReplace
* moreFiles, supports, minimum, maximum, sizeBetween
- Add corresponding Chinese translations in zh.json for all dropzone messages
- Support dynamic content in translations using parameterized keys (files count, size ranges)
- Ensure consistent user experience across English and Chinese interfaces
2025-12-30 21:19:37 +08:00
yyhuni
80ffecba3e feat(i18n): Add UI component i18n provider and standardize translation keys
- Add UiI18nProvider component to wrap UI library translations globally
- Integrate UiI18nProvider into root layout for consistent i18n support
- Standardize download action translation keys (allEndpoints → all, selectedEndpoints → selected)
- Update ExpandableTagList component prop from maxVisible to maxLines for better layout control
- Fix color scheme in dashboard stop scan button (chart-2 → primary)
- Add DOCKER_API_VERSION configuration to backend settings for Docker client compatibility
- Update task distributor to use configurable Docker API version (default 1.40)
- Add environment variable support for Docker API version in task execution commands
- Update i18n configuration and message files with standardized keys
- Ensure UI components respect application locale settings across all data tables and dialogs
2025-12-30 21:19:28 +08:00
yyhuni
3c21ac940c Restore SSH Docker execution 2025-12-30 20:35:51 +08:00
yyhuni
5c9f484d70 fix(frontend): Fix i18n translation key references and add missing labels
- Change "nav" translation namespace to "navigation" in scan engine and wordlists pages
- Replace parameterized translation calls with raw translation strings for cron schedule options in scheduled scan page and dashboard component
- Cast raw translation results to string type for proper TypeScript typing
- Add missing "name" and "type" labels to fingerprint section in English and Chinese message files
- Ensure consistent translation key usage across components for better maintainability
2025-12-30 18:21:16 +08:00
yyhuni
7567f6c25b Update text descriptions 2025-12-30 18:08:39 +08:00
yyhuni
0599a0b298 Add ansi-to-html 2025-12-30 18:01:29 +08:00
yyhuni
f7557fe90c Use ansi-to-html for log display 2025-12-30 18:01:22 +08:00
yyhuni
13571b9772 fix(frontend): Fix xterm SSR initialization error
- Add 100ms delay for terminal initialization to ensure DOM is mounted
- Use requestAnimationFrame for fit() to avoid dimensions error
- Add try-catch for all xterm operations
- Proper cleanup on unmount

Fixes: Cannot read properties of undefined (reading 'dimensions')
2025-12-30 17:41:38 +08:00
yyhuni
8ee76eef69 feat(frontend): Add ANSI color support for system logs
- Create AnsiLogViewer component using xterm.js
- Replace Monaco Editor with xterm for log viewing
- Native ANSI escape code rendering (colors, bold, etc.)
- Auto-scroll to bottom, clickable URLs support

Benefits:
- Colorized logs for better readability
- No more escape codes like [32m[0m in UI
- Professional terminal-like experience
2025-12-30 17:39:12 +08:00
yyhuni
2a31e29aa2 fix: Add shell quoting for command arguments
- Use shlex.quote() to escape special characters in argument values
- Fixes: 'unrecognized arguments' error when values contain spaces
- Example: target_name='example.com scan' now correctly quoted
2025-12-30 17:32:09 +08:00
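The quoting fix above can be sketched as follows; `build_args` is a hypothetical helper for illustration, not the project's actual function name.

```python
import shlex

def build_args(options: dict) -> str:
    """Build a shell-safe argument string; each value is escaped with shlex.quote()."""
    parts = []
    for flag, value in options.items():
        # shlex.quote wraps values containing spaces or shell metacharacters in quotes
        parts.append(f"--{flag} {shlex.quote(str(value))}")
    return " ".join(parts)

# Without quoting, a value with a space would split into two arguments
# and trigger an 'unrecognized arguments' error downstream.
print(build_args({"target-name": "example.com scan"}))
```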
yyhuni
81abc59961 Refactor: Migrate TaskDistributor to Docker SDK
- Replace CLI subprocess with Python Docker SDK
- Add DockerClientManager for unified container management
- Remove 300+ lines of shell command building code
- Enable future features: container status monitoring, log streaming

Breaking changes: None (backward compatible with existing scans)
Rollback: git reset --hard v1.0-before-docker-sdk
2025-12-30 17:23:18 +08:00
yyhuni
ffbfec6dd5 feat(stage2): Refactor TaskDistributor to use Docker SDK
- Replace CLI subprocess calls with DockerClientManager.run_container()
- Add helper methods: _build_container_command, _build_container_environment, _build_container_volumes
- Refactor execute_scan_flow() and execute_cleanup_on_all_workers() to use SDK
- Remove old CLI methods: _build_docker_command, _execute_docker_command, _execute_local_docker, _execute_ssh_docker
- Remove paramiko import (no longer needed for local workers)

Benefits:
- 300+ lines removed (CLI string building complexity)
- Type-safe container configuration (no more shlex.quote errors)
- Structured error handling (ImageNotFound, APIError)
- Ready for container status monitoring and log streaming
2025-12-30 17:20:26 +08:00
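The helper methods named above might look roughly like this minimal sketch; the function names are loosely modeled on the `_build_container_*` helpers in the log, and every field beyond `DOCKER_API_VERSION` and the `/opt/xingrin-tools/bin` mount is an illustrative assumption.

```python
def build_container_environment(task_id: str, api_version: str = "1.40") -> dict:
    """Environment mapping handed to containers.run(); the API version is configurable."""
    return {"TASK_ID": task_id, "DOCKER_API_VERSION": api_version}

def build_container_volumes(tools_dir: str) -> dict:
    """Docker SDK volume spec: host path mapped to a read-only bind mount."""
    return {tools_dir: {"bind": "/opt/xingrin-tools/bin", "mode": "ro"}}

def build_run_kwargs(image: str, command: list, task_id: str, tools_dir: str) -> dict:
    """Structured kwargs for docker_client.containers.run() -- no shell strings, no shlex."""
    return {
        "image": image,
        "command": command,  # passing a list sidesteps shell quoting entirely
        "environment": build_container_environment(task_id),
        "volumes": build_container_volumes(tools_dir),
        "detach": True,
    }
```

This is the core of the SDK migration: instead of concatenating a `docker run ...` string, the distributor hands the SDK typed Python structures, so quoting bugs become impossible by construction.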
yyhuni
a0091636a8 feat(stage1): Add DockerClientManager
- Create docker_client_manager.py with local Docker client support
- Add container lifecycle management (run, status, logs, stop, remove)
- Implement structured error handling (ImageNotFound, APIError)
- Add client connection caching and reuse
- Set Docker API version to 1.40 (compatible with Docker 19.03+)
- Add dependencies: docker>=6.0.0, packaging>=21.0

TODO: Remote worker support (Docker Context or SSH tunnel)
2025-12-30 17:17:17 +08:00
yyhuni
69490ab396 feat: Add DockerClientManager for unified Docker client management
- Create docker_client_manager.py with local Docker client support
- Add container lifecycle management (run, status, logs, stop, remove)
- Implement structured error handling (ImageNotFound, APIError)
- Add client connection caching and reuse
- Set Docker API version to 1.40 (compatible with Docker 19.03+)
- Add docker>=6.0.0 and packaging>=21.0 dependencies

TODO: Remote worker support (Docker Context or SSH tunnel)
2025-12-30 17:15:29 +08:00
yyhuni
7306964abf Update README 2025-12-30 16:44:08 +08:00
yyhuni
cb6b0259e3 fix: response format mismatch 2025-12-30 16:40:17 +08:00
yyhuni
e1b4618e58 refactor(worker): isolate scan tools to dedicated directory
- Move scan tools base path from `/usr/local/bin` to `/opt/xingrin-tools/bin` to avoid conflicts with system tools and Python packages
- Create dedicated `/opt/xingrin-tools/bin` directory in worker Dockerfile following FHS standards
- Update PATH environment variable to prioritize project-specific tools directory
- Add `SCAN_TOOLS_PATH` environment variable to `.env.example` with documentation
- Update settings.py to use new default path with explanatory comments
- Fix TypeScript type annotation in system-logs-view.tsx for better maintainability
- Remove frontend package-lock.json to reduce repository size
- Update task distributor comment to reflect new tool location
This change improves tool isolation and prevents naming conflicts while maintaining FHS compliance.
2025-12-30 11:42:09 +08:00
yyhuni
556dcf5f62 Refactor log UI functionality 2025-12-30 11:13:38 +08:00
yyhuni
0628eef025 Refactor responses into a standard response format 2025-12-30 10:56:26 +08:00
yyhuni
38ed8bc642 fix(scan): improve config parser validation and enable subdomain resolve timeout
- Uncomment timeout: auto setting in subdomain discovery config example
- Add validation to reject None or non-dict configuration values
- Raise ValueError with descriptive message when config is None
- Raise ValueError when config is not a dictionary type
- Update docstring to document Raises section for error conditions
- Prevent silent failures from malformed YAML configurations
2025-12-30 08:54:02 +08:00
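The validation described above can be sketched as a small guard; `validate_engine_config` is a hypothetical name, but the behavior follows the bullets: reject `None` and non-dict values with descriptive `ValueError`s instead of failing silently.

```python
def validate_engine_config(config):
    """Reject None or non-dict engine configurations instead of failing silently.

    Raises:
        ValueError: if the parsed YAML is None (empty/malformed file) or not a mapping.
    """
    if config is None:
        raise ValueError("Engine config is None: the YAML file is empty or malformed")
    if not isinstance(config, dict):
        raise ValueError(f"Engine config must be a mapping, got {type(config).__name__}")
    return config
```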
yyhuni
2f4d6a2168 Unify tool mounts under /usr/local/bin 2025-12-30 08:45:36 +08:00
yyhuni
c25cb9e06b fix: tool mounting 2025-12-30 08:39:17 +08:00
yyhuni
b14ab71c7f fix: auth frontend 2025-12-30 08:12:04 +08:00
github-actions[bot]
8b5060e2d3 chore: bump version to v1.2.2-dev 2025-12-29 17:08:05 +00:00
yyhuni
3c9335febf refactor: determine target branch by tag location instead of naming
- Check which branch contains the tag (main or dev)
- Update VERSION file on the source branch
- Only tags from main branch update 'latest' Docker tag
- More flexible and follows standard Git workflow
2025-12-29 23:34:05 +08:00
yyhuni
1b95e4f2c3 feat: update VERSION file for dev tags on dev branch
- Dev tags (v*-dev) now update VERSION file on dev branch
- Release tags (v* without suffix) update VERSION file on main branch
- Keeps main and dev branches independent
2025-12-29 23:30:17 +08:00
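The tag-routing rule above boils down to a tiny classifier; `version_branch_for_tag` is an illustrative sketch of the logic, not the workflow's actual code.

```python
def version_branch_for_tag(tag: str) -> str:
    """Decide which branch's VERSION file a pushed tag should update."""
    if not tag.startswith("v"):
        raise ValueError(f"not a version tag: {tag}")
    # Dev pre-release tags (v*-dev) belong to the dev branch
    if tag.endswith("-dev"):
        return "dev"
    # Plain release tags (v* with no suffix) belong to main
    return "main"
```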
yyhuni
d20a600afc refactor: use settings.GIT_MIRROR instead of os.getenv in worker_views 2025-12-29 23:13:35 +08:00
yyhuni
c29b11fd37 feat: add GIT_MIRROR to worker config center
- Add gitMirror field to worker configuration API
- Container bootstrap reads gitMirror and sets GIT_MIRROR env var
- Remove redundant GIT_MIRROR injection from task_distributor
- All environment variables are managed through config center
2025-12-29 23:11:31 +08:00
yyhuni
6caf707072 refactor: replace Chinese comments with English in frontend components
- Replace all Chinese inline comments with English equivalents across 24 frontend component files
- Update JSDoc comments to use English for better code documentation
- Improve code readability and maintainability for international development team
- Standardize comment style across directories, endpoints, ip-addresses, subdomains, and websites components
- Ensure consistency with previous frontend refactoring efforts
2025-12-29 23:01:16 +08:00
yyhuni
2627b1fc40 refactor: replace Chinese comments with English across frontend components
- Replace Chinese comments with English in fingerprint components (ehole, goby, wappalyzer)
- Update comments in scan engine, history, and scheduled scan modules
- Translate comments in worker deployment and configuration dialogs
- Update comments in subdomain management and target components
- Translate comments in tools configuration and command modules
- Replace Chinese comments in vulnerability components
- Improve code maintainability and consistency with English documentation standards
- Update Docker build workflow cache configuration with image-specific scopes for better cache isolation
2025-12-29 22:14:12 +08:00
yyhuni
ec6712b9b4 fix: add null coalescing to prevent undefined values in i18n translations
- Add null coalescing operator (?? "") to all i18n translation parameters across components
- Fix scheduled scan deletion dialog to handle undefined scheduled scan name
- Fix nuclei page to pass locale parameter to formatDateTime function
- Fix organization detail view unlink target dialog to handle undefined target name
- Fix organization list deletion dialog to handle undefined organization name
- Fix organization targets detail view unlink dialog to handle undefined target name
- Fix engine edit dialog to handle undefined engine name
- Fix scan history list deletion and stop dialogs to handle undefined target names
- Fix worker list deletion dialog to handle undefined worker name
- Fix all targets detail view deletion dialog to handle undefined target name
- Fix custom tools and opensource tools lists to handle undefined tool names
- Fix vulnerabilities detail view to handle undefined vulnerability names
- Prevents runtime errors when translation parameters are undefined or null
2025-12-29 21:03:47 +08:00
yyhuni
9d5e4d5408 fix(scan/engine): handle undefined engine name in delete confirmation
- Add nullish coalescing operator to prevent undefined value in delete confirmation message
- Ensure engineToDelete?.name defaults to empty string when undefined
- Improve robustness of alert dialog description rendering
2025-12-29 20:54:00 +08:00
yyhuni
c5d5b24c8f Update GitHub Actions: dev tags no longer bump VERSION 2025-12-29 20:48:42 +08:00
yyhuni
671cb56b62 fix: speed up Nuclei template sync; templates downloaded to the host stay in sync 2025-12-29 20:43:49 +08:00
yyhuni
51025f69a8 fix: mainland China mirror acceleration 2025-12-29 20:15:25 +08:00
yyhuni
b2403b29c4 Remove update.sh 2025-12-29 20:08:40 +08:00
yyhuni
18ef01a47b fix: CN mirror acceleration 2025-12-29 20:03:14 +08:00
yyhuni
0bf8108fb3 fix: registry mirror acceleration 2025-12-29 19:51:33 +08:00
yyhuni
837ad19131 fix: registry mirror acceleration issue 2025-12-29 19:48:48 +08:00
yyhuni
d7de9a7129 fix: registry mirror acceleration issue 2025-12-29 19:39:59 +08:00
yyhuni
22b4e51b42 feat(xget): add Git URL acceleration support via Xget proxy
- Add xget_proxy utility module to convert Git repository URLs to Xget proxy format
- Support domain mapping for GitHub, GitLab, Gitea, and Codeberg repositories
- Integrate Xget proxy into Nuclei template repository cloning process
- Add XGET_MIRROR environment variable configuration in container bootstrap
- Export XGET_MIRROR setting to worker node configuration endpoint
- Add --mirror flag to install.sh for easy Xget acceleration setup
- Add configure_docker_mirror function to install.sh for Docker registry mirror configuration
- Enable Git clone acceleration for faster template repository downloads in air-gapped or bandwidth-limited environments
2025-12-29 19:32:05 +08:00
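The URL rewrite described above might look like the following sketch. The host-to-prefix mapping and the proxy URL layout are assumptions for illustration; the actual `xget_proxy` module may use a different scheme.

```python
from urllib.parse import urlparse

# Assumed host -> path-prefix mapping; the real xget_proxy module may differ
XGET_PREFIXES = {
    "github.com": "gh",
    "gitlab.com": "gl",
    "gitea.com": "gitea",
    "codeberg.org": "codeberg",
}

def to_xget_url(repo_url: str, mirror: str) -> str:
    """Rewrite a Git clone URL through an Xget-style proxy; unknown hosts pass through."""
    parsed = urlparse(repo_url)
    prefix = XGET_PREFIXES.get(parsed.netloc)
    if not mirror or not prefix:
        # No mirror configured, or host not supported: clone directly
        return repo_url
    return f"{mirror.rstrip('/')}/{prefix}{parsed.path}"
```

With `XGET_MIRROR` unset the function degrades to a no-op, which matches the opt-in `--mirror` flag in install.sh.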
yyhuni
d03628ee45 feat(i18n): translate Chinese comments to English in scan history component
- Replace Chinese console error messages with English equivalents
- Translate all inline code comments from Chinese to English
- Update dialog and section comments for consistency
- Improve code readability and maintainability for international development team
2025-12-29 18:42:13 +08:00
yyhuni
0baabe0753 feat(i18n): internationalize frontend components with English translations
- Replace Chinese comments with English equivalents across auth, dashboard, and scan components
- Update UI text labels and descriptions from Chinese to English in bulk-add-urls-dialog
- Translate placeholder text and dialog titles in asset management components
- Update column headers and data table labels to English in organization and engine modules
- Standardize English documentation strings in auth-guard and auth-layout components
- Improve code maintainability and accessibility for international users
- Align with existing internationalization efforts across the frontend codebase
2025-12-29 18:39:25 +08:00
yyhuni
e1191d7abf Internationalize frontend UI 2025-12-29 18:10:05 +08:00
yyhuni
82a2e9a0e7 Internationalize frontend 2025-12-29 18:09:57 +08:00
yyhuni
1ccd1bc338 Update gfPatterns 2025-12-28 20:26:32 +08:00
yyhuni
b4d42f5372 Update fingerprint management search 2025-12-28 20:18:26 +08:00
yyhuni
2c66450756 Unify UI 2025-12-28 20:10:46 +08:00
yyhuni
119d82dc89 Update UI 2025-12-28 20:06:17 +08:00
yyhuni
fba7f7c508 Update UI 2025-12-28 19:55:57 +08:00
yyhuni
99d384ce29 Fix frontend column widths 2025-12-28 16:37:35 +08:00
yyhuni
07f36718ab Refactor frontend 2025-12-28 16:27:01 +08:00
yyhuni
7e3f69c208 Refactor frontend components 2025-12-28 12:05:47 +08:00
yyhuni
5f90473c3c fix: UI 2025-12-28 08:48:25 +08:00
yyhuni
e2a815b96a Add Goby and Wappalyzer fingerprints 2025-12-28 08:42:37 +08:00
yyhuni
f86a1a9d47 Optimize UI 2025-12-27 22:01:40 +08:00
yyhuni
d5945679aa Add logging 2025-12-27 21:50:43 +08:00
yyhuni
51e2c51748 fix: directory creation and mounting 2025-12-27 21:44:47 +08:00
yyhuni
e2cbf98dda fix: bug where target name was stripped 2025-12-27 21:27:05 +08:00
yyhuni
cd72bdf7c3 Integrate fingerprinting 2025-12-27 20:19:25 +08:00
yyhuni
35abcf7e39 Add blacklist logic 2025-12-27 20:12:01 +08:00
yyhuni
09f2d343a4 Add: refactor export logic and add blacklist filtering 2025-12-27 20:11:50 +08:00
yyhuni
54d1f86bde fix: installation error 2025-12-27 17:51:32 +08:00
yyhuni
a3997c9676 Update YAML 2025-12-27 12:52:49 +08:00
yyhuni
c90a55f85e Update load-balancing logic 2025-12-27 12:49:14 +08:00
yyhuni
2eab88b452 chore(install): Add banner display and update confirmation
- Add show_banner() function to display XingRin ASCII art logo
- Call show_banner() before header in install.sh initialization
- Add experimental feature warning in update.sh with user confirmation
- Prompt user to confirm before proceeding with update operation
- Suggest full reinstall via uninstall.sh and install.sh as alternative
- Improve user experience with visual feedback and safety checks
2025-12-27 12:41:04 +08:00
yyhuni
1baf0eb5e1 fix: fingerprint scan command 2025-12-27 12:29:50 +08:00
yyhuni
b61e73f7be fix: JSON output 2025-12-27 12:14:35 +08:00
yyhuni
e896734dfc feat(scan-engine): Add fingerprint detection feature flag
- Add fingerprint_detect feature flag to engine configuration parser
- Enable fingerprint detection capability in scan engine features
- Integrate fingerprint detection into existing feature detection logic
2025-12-27 11:59:51 +08:00
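The feature-flag integration above can be sketched as follows. This assumes a feature counts as enabled when its section appears in the parsed engine config; the set of known feature names other than `fingerprint_detect` is illustrative.

```python
def detect_features(engine_config: dict) -> set:
    """Treat a feature as enabled when its section is present in the engine config."""
    # Known flags; only fingerprint_detect is confirmed by the log, the rest are examples
    known = {"subdomain_discovery", "port_scan", "fingerprint_detect", "vuln_scan"}
    return known & set(engine_config)
```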
yyhuni
cd83f52f35 Add fingerprint detection 2025-12-27 11:39:26 +08:00
yyhuni
3e29554c36 Add: fingerprint detection 2025-12-27 11:39:19 +08:00
yyhuni
18e02b536e Integrate fingerprint detection 2025-12-27 10:06:23 +08:00
yyhuni
4c1c6f70ab Update fingerprints 2025-12-26 21:50:38 +08:00
yyhuni
a72e7675f5 Update UI 2025-12-26 21:40:56 +08:00
yyhuni
93c2163764 Add: EHole fingerprint import 2025-12-26 21:34:36 +08:00
yyhuni
de72c91561 Update UI 2025-12-25 18:31:09 +08:00
github-actions[bot]
3e6d060b75 chore: bump version to v1.1.14 2025-12-25 10:11:08 +00:00
yyhuni
766f045904 fix: ffuf concurrency issue 2025-12-25 18:02:25 +08:00
yyhuni
8acfe1cc33 Adjust log levels 2025-12-25 17:44:31 +08:00
github-actions[bot]
7aec3eabb2 chore: bump version to v1.1.13 2025-12-25 08:29:39 +00:00
yyhuni
b1f11c36a4 fix: wordlist download port 2025-12-25 16:21:32 +08:00
yyhuni
d97fb5245a fix: prompt text 2025-12-25 16:18:46 +08:00
github-actions[bot]
ddf9a1f5a4 chore: bump version to v1.1.12 2025-12-25 08:10:57 +00:00
yyhuni
47f9f96a4b Update documentation 2025-12-25 16:07:30 +08:00
yyhuni
6f43e73162 Update README 2025-12-25 16:06:01 +08:00
yyhuni
9b7d496f3e Update: change port to 8083 2025-12-25 16:02:55 +08:00
github-actions[bot]
6390849d52 chore: bump version to v1.1.11 2025-12-25 03:58:05 +00:00
yyhuni
7a6d2054f6 Update: UI 2025-12-25 11:50:21 +08:00
yyhuni
73ebaab232 Update: UI 2025-12-25 11:31:25 +08:00
github-actions[bot]
11899b29c2 chore: bump version to v1.1.10 2025-12-25 03:20:57 +00:00
github-actions[bot]
877d2a56d1 chore: bump version to v1.1.9 2025-12-25 03:13:58 +00:00
yyhuni
dc1e94f038 Update: UI 2025-12-25 11:12:51 +08:00
yyhuni
9c3833d13d Update: UI 2025-12-25 11:06:00 +08:00
github-actions[bot]
92f3b722ef chore: bump version to v1.1.8 2025-12-25 02:16:12 +00:00
yyhuni
9ef503c666 Update: UI 2025-12-25 10:12:06 +08:00
yyhuni
c3a43e94fa Fix: UI 2025-12-25 10:08:25 +08:00
github-actions[bot]
d6d94355fb chore: bump version to v1.1.7 2025-12-25 02:02:27 +00:00
yyhuni
bc638eabf4 Update: UI 2025-12-25 10:02:13 +08:00
yyhuni
5acaada7ab Add: multi-field search support 2025-12-25 09:54:50 +08:00
github-actions[bot]
aaad3f29cf chore: bump version to v1.1.6 2025-12-24 12:19:12 +00:00
yyhuni
f13eb2d9b2 Update: UI style 2025-12-24 20:10:12 +08:00
yyhuni
f1b3b60382 Add: EVA theme 2025-12-24 19:57:26 +08:00
yyhuni
e249056289 Update README.md 2025-12-24 19:14:22 +08:00
yyhuni
dba195b83a Update README 2025-12-24 17:28:08 +08:00
github-actions[bot]
9b494e6c67 chore: bump version to v1.1.5 2025-12-24 09:23:21 +00:00
1079 changed files with 396234 additions and 33989 deletions


@@ -0,0 +1,45 @@
name: Check Generated Files
on:
workflow_call: # Run only when called by another workflow
permissions:
contents: read
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: 1.21
- name: Generate files for all workflows
working-directory: worker
run: make generate
- name: Check for differences
run: |
if ! git diff --exit-code; then
echo "❌ Generated files are out of date!"
echo "Please run: cd worker && make generate"
echo ""
echo "Changed files:"
git status --porcelain
echo ""
echo "Diff:"
git diff
exit 1
fi
echo "✅ Generated files are up to date"
- name: Run metadata consistency tests
working-directory: worker
run: make test-metadata
- name: Run all tests
working-directory: worker
run: make test


@@ -1,138 +0,0 @@
name: Build and Push Docker Images
on:
push:
tags:
- 'v*' # Trigger only when a tag starting with "v" is pushed (e.g. v1.0.0)
workflow_dispatch: # Manual trigger
# Concurrency control: keep only the latest build per ref and cancel in-progress runs
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
REGISTRY: docker.io
IMAGE_PREFIX: yyhuni
permissions:
contents: write
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- image: xingrin-server
dockerfile: docker/server/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-frontend
dockerfile: docker/frontend/Dockerfile
context: .
platforms: linux/amd64 # Next.js crashes under QEMU when building for ARM64
- image: xingrin-worker
dockerfile: docker/worker/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-nginx
dockerfile: docker/nginx/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-agent
dockerfile: docker/agent/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Free disk space (for large builds like worker)
run: |
echo "=== Before cleanup ==="
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker image prune -af
echo "=== After cleanup ==="
df -h
- name: Generate SSL certificates for nginx build
if: matrix.image == 'xingrin-nginx'
run: |
mkdir -p docker/nginx/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout docker/nginx/ssl/privkey.pem \
-out docker/nginx/ssl/fullchain.pem \
-subj "/CN=localhost"
echo "SSL certificates generated for CI build"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version from git tag
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
echo "IS_RELEASE=true" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
echo "IS_RELEASE=false" >> $GITHUB_OUTPUT
fi
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: true
tags: |
${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}
${{ steps.version.outputs.IS_RELEASE == 'true' && format('{0}/{1}:latest', env.IMAGE_PREFIX, matrix.image) || '' }}
build-args: |
IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=gha
cache-to: type=gha,mode=max
provenance: false
sbom: false
# After all images build successfully, update the VERSION file
update-version:
runs-on: ubuntu-latest
needs: build
if: startsWith(github.ref, 'refs/tags/v')
steps:
- name: Checkout
uses: actions/checkout@v4
with:
ref: main
token: ${{ secrets.GITHUB_TOKEN }}
- name: Update VERSION file
run: |
VERSION="${GITHUB_REF#refs/tags/}"
echo "$VERSION" > VERSION
echo "Updated VERSION to $VERSION"
- name: Commit and push
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git add VERSION
git diff --staged --quiet || git commit -m "chore: bump version to ${GITHUB_REF#refs/tags/}"
git push

.gitignore

@@ -1,136 +1,51 @@
# ============================
# OS-related files
# ============================
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Go
*.exe
*.exe~
*.dll
*.so
*.dylib
*.test
*.out
vendor/
go.work
# ============================
# Frontend (Next.js/Node.js)
# ============================
# Dependency directories
front-back/node_modules/
front-back/.pnpm-store/
# Build artifacts
dist/
build/
bin/
# Next.js build output
front-back/.next/
front-back/out/
front-back/dist/
# Environment variable files
front-back/.env
front-back/.env.local
front-back/.env.development.local
front-back/.env.test.local
front-back/.env.production.local
# Runtime and cache
front-back/.turbo/
front-back/.swc/
front-back/.eslintcache
front-back/.tsbuildinfo
# ============================
# Backend (Python/Django)
# ============================
# Python virtual environments
.venv/
venv/
env/
ENV/
# Compiled Python files
*.pyc
*.pyo
*.pyd
__pycache__/
*.py[cod]
*$py.class
# Django
backend/db.sqlite3
backend/db.sqlite3-journal
backend/media/
backend/staticfiles/
backend/.env
backend/.env.local
# Python testing and coverage
.pytest_cache/
.coverage
htmlcov/
*.cover
# ============================
# Backend (Go)
# ============================
# Build artifacts
backend/bin/
backend/dist/
backend/*.exe
backend/*.exe~
backend/*.dll
backend/*.so
backend/*.dylib
# Testing
backend/*.test
backend/*.out
backend/*.prof
# Go workspace files
backend/go.work
backend/go.work.sum
# Go dependency management
backend/vendor/
# ============================
# IDEs and editors
# ============================
# IDE
.vscode/
.idea/
.cursor/
.claude/
.kiro/
.playwright-mcp/
*.swp
*.swo
*~
.DS_Store
# ============================
# Docker
# ============================
docker/.env
docker/.env.local
# SSL certificates and private keys (must not be committed)
docker/nginx/ssl/*.pem
docker/nginx/ssl/*.key
docker/nginx/ssl/*.crt
# ============================
# Log files and scan results
# ============================
# Environment
.env
.env.local
.env.*.local
*.log
logs/
results/
.venv/
# Dev script runtime files (PIDs and startup logs)
backend/scripts/dev/.pids/
# Testing
coverage.txt
*.coverprofile
.hypothesis/
# ============================
# Temporary files
# ============================
# Temporary files
*.tmp
tmp/
temp/
.cache/
HGETALL
KEYS
vuln_scan/input_endpoints.txt
open-in-v0
.kiro/
.claude/
.specify/
# AI Assistant directories
codex/
openspec/
specs/
AGENTS.md
WARP.md

.vscode/settings.json

@@ -0,0 +1,4 @@
{
"typescript.autoClosingTags": false,
"kiroAgent.configureMCP": "Enabled"
}

README.md

@@ -1,268 +0,0 @@
<h1 align="center">XingRin - 星环</h1>
<p align="center">
<b>🛡️ Attack Surface Management (ASM) Platform | Automated Asset Discovery and Vulnerability Scanning</b>
</p>
<p align="center">
<a href="https://github.com/yyhuni/xingrin/stargazers"><img src="https://img.shields.io/github/stars/yyhuni/xingrin?style=flat-square&logo=github" alt="GitHub stars"></a>
<a href="https://github.com/yyhuni/xingrin/network/members"><img src="https://img.shields.io/github/forks/yyhuni/xingrin?style=flat-square&logo=github" alt="GitHub forks"></a>
<a href="https://github.com/yyhuni/xingrin/issues"><img src="https://img.shields.io/github/issues/yyhuni/xingrin?style=flat-square&logo=github" alt="GitHub issues"></a>
<a href="https://github.com/yyhuni/xingrin/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-PolyForm%20NC-blue?style=flat-square" alt="License"></a>
</p>
<p align="center">
<a href="#-功能特性">Features</a> •
<a href="#-快速开始">Quick Start</a> •
<a href="#-文档">Documentation</a> •
<a href="#-技术栈">Tech Stack</a> •
<a href="#-反馈与贡献">Feedback & Contributing</a>
</p>
<p align="center">
<sub>🔍 Keywords: ASM | Attack Surface Management | Vulnerability Scanning | Asset Discovery | Bug Bounty | Penetration Testing | Nuclei | Subdomain Enumeration | EASM</sub>
</p>
---
<p align="center">
<b>🌗 Light / Dark Mode Toggle</b>
</p>
<p align="center">
<img src="docs/screenshots/light.png" alt="Light Mode" width="49%">
<img src="docs/screenshots/dark.png" alt="Dark Mode" width="49%">
</p>
<p align="center">
<b>🎨 Multiple UI Themes</b>
</p>
<p align="center">
<img src="docs/screenshots/bubblegum.png" alt="Bubblegum" width="32%">
<img src="docs/screenshots/cosmic-night.png" alt="Cosmic Night" width="32%">
<img src="docs/screenshots/quantum-rose.png" alt="Quantum Rose" width="32%">
</p>
## 📚 Documentation
- [📖 Technical Docs](./docs/README.md) - Technical documentation index (🚧 work in progress)
- [🚀 Quick Start](./docs/quick-start.md) - One-command installation and deployment guide
- [🔄 Version Management](./docs/version-management.md) - Git-tag-driven automated version management
- [📦 Nuclei Template Architecture](./docs/nuclei-template-architecture.md) - Template repository storage and sync
- [📖 Wordlist Architecture](./docs/wordlist-architecture.md) - Wordlist storage and sync
- [🔍 Scan Flow Architecture](./docs/scan-flow-architecture.md) - Full scan flow and tool orchestration
---
## ✨ Features
### 🎯 Target & Asset Management
- **Organization Management** - Multi-level target organization with flexible grouping
- **Target Management** - Supports domain and IP target types
- **Asset Discovery** - Automatic discovery of subdomains, websites, endpoints, and directories
- **Asset Snapshots** - Compare scan result snapshots to track asset changes
### 🔍 Vulnerability Scanning
- **Multi-Engine Support** - Integrates Nuclei and other mainstream scanning engines
- **Custom Pipelines** - YAML-configured scan flows with flexible orchestration
- **Scheduled Scans** - Cron-based configuration for automated periodic scanning
#### Scan Flow Architecture
The full scan flow covers subdomain discovery, port scanning, site discovery, URL collection, directory scanning, and vulnerability scanning.
```mermaid
flowchart LR
START["Start scan"]
subgraph STAGE1["Stage 1: Asset Discovery"]
direction TB
SUB["Subdomain discovery<br/>subfinder, amass, puredns"]
PORT["Port scanning<br/>naabu"]
SITE["Site identification<br/>httpx"]
SUB --> PORT --> SITE
end
subgraph STAGE2["Stage 2: Deep Analysis"]
direction TB
URL["URL collection<br/>waymore, katana"]
DIR["Directory scanning<br/>ffuf"]
end
subgraph STAGE3["Stage 3: Vulnerability Detection"]
VULN["Vulnerability scanning<br/>nuclei, dalfox"]
end
FINISH["Scan complete"]
START --> STAGE1
SITE --> STAGE2
STAGE2 --> STAGE3
STAGE3 --> FINISH
style START fill:#34495e,stroke:#2c3e50,stroke-width:2px,color:#fff
style FINISH fill:#27ae60,stroke:#229954,stroke-width:2px,color:#fff
style STAGE1 fill:#3498db,stroke:#2980b9,stroke-width:2px,color:#fff
style STAGE2 fill:#9b59b6,stroke:#8e44ad,stroke-width:2px,color:#fff
style STAGE3 fill:#e67e22,stroke:#d35400,stroke-width:2px,color:#fff
style SUB fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style PORT fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style SITE fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style URL fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style DIR fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style VULN fill:#f0b27a,stroke:#e67e22,stroke-width:1px,color:#fff
```
See the [scan flow architecture docs](./docs/scan-flow-architecture.md) for details
### 🖥️ Distributed Architecture
- **Multi-Node Scanning** - Deploy multiple worker nodes to scale scanning capacity horizontally
- **Local Node** - Zero configuration; a local Docker worker registers automatically at install time
- **Remote Nodes** - One-command SSH deployment of remote VPSes as scan nodes
- **Load-Aware Scheduling** - Tracks node load in real time and dispatches tasks to the best node
- **Node Monitoring** - Real-time heartbeat checks and CPU/memory/disk status monitoring
- **Auto-Reconnect** - Offline nodes are detected automatically and rejoin once they recover
```mermaid
flowchart TB
subgraph MASTER["Master Server"]
direction TB
REDIS["Redis load cache"]
subgraph SCHEDULER["Task Distributor"]
direction TB
SUBMIT["Receive scan tasks"]
SELECT["Load-aware selection"]
DISPATCH["Smart dispatch"]
SUBMIT --> SELECT
SELECT --> DISPATCH
end
REDIS -.load data.-> SELECT
end
subgraph WORKERS["Worker node cluster"]
direction TB
W1["Worker 1 (local)<br/>CPU: 45% | MEM: 60%"]
W2["Worker 2 (remote)<br/>CPU: 30% | MEM: 40%"]
W3["Worker N (remote)<br/>CPU: 90% | MEM: 85%"]
end
DISPATCH -->|dispatch task| W1
DISPATCH -->|dispatch task| W2
DISPATCH -->|skip: high load| W3
W1 -.heartbeat.-> REDIS
W2 -.heartbeat.-> REDIS
W3 -.heartbeat.-> REDIS
```
### 📊 Visual Interface
- **Statistics** - Asset and vulnerability dashboards
- **Real-Time Notifications** - WebSocket message push
---
## 📦 Quick Start
### Requirements
- **OS**: Ubuntu 20.04+ / Debian 11+ (recommended)
- **Hardware**: 2 cores / 4 GB RAM minimum, 20 GB+ disk space
### One-Command Install
```bash
# Clone the project
git clone https://github.com/yyhuni/xingrin.git
cd xingrin
# Install and start (production mode)
sudo ./install.sh
```
### Access
- **Web UI**: `https://localhost`
### Common Commands
```bash
# Start services
sudo ./start.sh
# Stop services
sudo ./stop.sh
# Restart services
sudo ./restart.sh
# Uninstall
sudo ./uninstall.sh
# Update
sudo ./update.sh
```
## 🤝 Feedback & Contributing
- 🐛 **Found a bug?** Submit an [Issue](https://github.com/yyhuni/xingrin/issues)
- 💡 **Have ideas for UI or feature design?** Suggestions are welcome via an [Issue](https://github.com/yyhuni/xingrin/issues)
- 🔧 **Want to contribute?** Follow the WeChat official account below to get in touch
## 📧 Contact
- The current version is mainly used by me personally, so there may be many edge-case issues
- For questions, suggestions, or anything else, please open an [Issue](https://github.com/yyhuni/xingrin/issues) first; you can also message the WeChat official account directly and I will reply
- WeChat Official Account: **洋洋的小黑屋**
<img src="docs/wechat-qrcode.png" alt="WeChat Official Account" width="200">
## ⚠️ Disclaimer
**Important: read carefully before use**
1. This tool is intended only for **authorized security testing** and **security research**
2. Users must ensure they have **legal authorization** for the target systems
3. Using this tool for unauthorized penetration testing or attacks is **strictly prohibited**
4. Scanning other people's systems without authorization is **illegal** and may carry legal liability
5. The developers are **not responsible for any misuse**
By using this tool you agree to:
- Use it only within the scope of legal authorization
- Comply with the laws and regulations of your jurisdiction
- Bear all consequences arising from misuse
## 🌟 Star History
If this project helps you, please give it a ⭐ Star!
[![Star History Chart](https://api.star-history.com/svg?repos=yyhuni/xingrin&type=Date)](https://star-history.com/#yyhuni/xingrin&Date)
## 📄 License
This project is licensed under the [GNU General Public License v3.0](LICENSE).
### Permitted Uses
- ✅ Personal study and research
- ✅ Commercial and non-commercial use
- ✅ Modification and distribution
- ✅ Patent use
- ✅ Private use
### Obligations and Restrictions
- 📋 **Source obligation**: source code must be provided when distributing
- 📋 **Same license**: derivative works must use the same license
- 📋 **Copyright notice**: the original copyright and license notices must be retained
- ⚠️ **No warranty**: provided without any warranty
- ❌ No unauthorized penetration testing
- ❌ No illegal activities


@@ -1 +0,0 @@
v1.1.4

agent/go.mod

@@ -0,0 +1,32 @@
module github.com/yyhuni/orbit/agent
go 1.24.5
require (
github.com/docker/docker v28.5.2+incompatible
github.com/gorilla/websocket v1.5.3
github.com/shirou/gopsutil/v3 v3.24.5
)
require (
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
golang.org/x/sys v0.39.0 // indirect
)

agent/go.sum

@@ -0,0 +1,78 @@
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM=
github.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/shirou/gopsutil/v3 v3.23.12/go.mod h1:1FrWgea594Jp7qmjHUUPlJDTPgcsb9mGnXDxavtikzM=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/yusufpapurcu/wmi v1.2.3/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 h1:ssfIgGNANqpVFCndZvcuyKbl0g+UAVcbBcqGkG28H0Y=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0/go.mod h1:GQ/474YrbE4Jx8gZ4q5I4hrhUzM6UPzyrqJYV2AqPoQ=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,73 +0,0 @@
"""HostPortMapping Service - 业务逻辑层"""
import logging
from typing import List, Iterator
from apps.asset.repositories.asset import DjangoHostPortMappingRepository
from apps.asset.dtos.asset import HostPortMappingDTO
logger = logging.getLogger(__name__)
class HostPortMappingService:
"""主机端口映射服务 - 负责主机端口映射数据的业务逻辑"""
def __init__(self):
self.repo = DjangoHostPortMappingRepository()
def bulk_create_ignore_conflicts(self, items: List[HostPortMappingDTO]) -> int:
"""
批量创建主机端口映射(忽略冲突)
Args:
items: 主机端口映射 DTO 列表
Returns:
int: 实际创建的记录数
Note:
使用数据库唯一约束 + ignore_conflicts 自动去重
"""
try:
logger.debug("Service: 准备批量创建主机端口映射 - 数量: %d", len(items))
created_count = self.repo.bulk_create_ignore_conflicts(items)
logger.info("Service: 主机端口映射创建成功 - 数量: %d", created_count)
return created_count
except Exception as e:
logger.error(
"Service: 批量创建主机端口映射失败 - 数量: %d, 错误: %s",
len(items),
str(e),
exc_info=True
)
raise
def iter_host_port_by_target(self, target_id: int, batch_size: int = 1000):
return self.repo.get_for_export(target_id=target_id, batch_size=batch_size)
def get_ip_aggregation_by_target(self, target_id: int, search: str = None):
return self.repo.get_ip_aggregation_by_target(target_id, search=search)
def get_all_ip_aggregation(self, search: str = None):
"""获取所有 IP 聚合数据(全局查询)"""
return self.repo.get_all_ip_aggregation(search=search)
def iter_ips_by_target(self, target_id: int, batch_size: int = 1000) -> Iterator[str]:
"""流式获取目标下的所有唯一 IP 地址。"""
return self.repo.get_ips_for_export(target_id=target_id, batch_size=batch_size)
def iter_raw_data_for_csv_export(self, target_id: int) -> Iterator[dict]:
"""
流式获取原始数据用于 CSV 导出
Args:
target_id: 目标 ID
Yields:
原始数据字典 {ip, host, port, created_at}
"""
return self.repo.iter_raw_data_for_export(target_id=target_id)
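The deduplication contract behind `bulk_create_ignore_conflicts` can be sketched outside Django: re-submitting an already-stored (ip, host, port) tuple creates nothing, and only genuinely new rows count toward the return value. This is a minimal illustrative model (the `Mapping` class and in-memory set stand in for the real table and its unique constraint):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Mapping:
    ip: str
    host: str
    port: int


def bulk_create_ignore_conflicts(existing: set, items: list) -> int:
    """Insert items not already present; return the number actually created."""
    created = 0
    for item in items:
        if item not in existing:  # stands in for the DB unique constraint
            existing.add(item)
            created += 1
    return created


db = {Mapping("10.0.0.1", "a.example.com", 80)}
batch = [
    Mapping("10.0.0.1", "a.example.com", 80),   # duplicate, skipped
    Mapping("10.0.0.1", "a.example.com", 443),  # new, created
]
print(bulk_create_ignore_conflicts(db, batch))  # → 1
```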

View File

@@ -1,963 +0,0 @@
import logging
from rest_framework import viewsets, status, filters
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.request import Request
from rest_framework.exceptions import NotFound, ValidationError as DRFValidationError
from django.core.exceptions import ValidationError, ObjectDoesNotExist
from django.db import DatabaseError, IntegrityError, OperationalError
from django.http import StreamingHttpResponse
from .serializers import (
SubdomainListSerializer, WebSiteSerializer, DirectorySerializer,
VulnerabilitySerializer, EndpointListSerializer, IPAddressAggregatedSerializer,
SubdomainSnapshotSerializer, WebsiteSnapshotSerializer, DirectorySnapshotSerializer,
EndpointSnapshotSerializer, VulnerabilitySnapshotSerializer
)
from .services import (
SubdomainService, WebSiteService, DirectoryService,
VulnerabilityService, AssetStatisticsService, EndpointService, HostPortMappingService
)
from .services.snapshot import (
SubdomainSnapshotsService, WebsiteSnapshotsService, DirectorySnapshotsService,
EndpointSnapshotsService, HostPortMappingSnapshotsService, VulnerabilitySnapshotsService
)
from apps.common.pagination import BasePagination
logger = logging.getLogger(__name__)
class AssetStatisticsViewSet(viewsets.ViewSet):
    """
    Asset statistics API.

    Provides the dashboard statistics (pre-aggregated, read from a cache table).
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = AssetStatisticsService()

    def list(self, request):
        """
        Get asset statistics.

        GET /assets/statistics/

        Returns:
            - totalTargets: total number of targets
            - totalSubdomains: total number of subdomains
            - totalIps: total number of IPs
            - totalEndpoints: total number of endpoints
            - totalWebsites: total number of websites
            - totalVulns: total number of vulnerabilities
            - totalAssets: total number of assets
            - runningScans: number of running scans
            - updatedAt: time the statistics were last updated
        """
        try:
            stats = self.service.get_statistics()
            return Response({
                'totalTargets': stats['total_targets'],
                'totalSubdomains': stats['total_subdomains'],
                'totalIps': stats['total_ips'],
                'totalEndpoints': stats['total_endpoints'],
                'totalWebsites': stats['total_websites'],
                'totalVulns': stats['total_vulns'],
                'totalAssets': stats['total_assets'],
                'runningScans': stats['running_scans'],
                'updatedAt': stats['updated_at'],
                # Deltas
                'changeTargets': stats['change_targets'],
                'changeSubdomains': stats['change_subdomains'],
                'changeIps': stats['change_ips'],
                'changeEndpoints': stats['change_endpoints'],
                'changeWebsites': stats['change_websites'],
                'changeVulns': stats['change_vulns'],
                'changeAssets': stats['change_assets'],
                # Vulnerability severity distribution
                'vulnBySeverity': stats['vuln_by_severity'],
            })
        except (DatabaseError, OperationalError) as e:
            logger.exception("Failed to fetch asset statistics")
            return Response(
                {'error': 'Failed to fetch statistics'},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )

    @action(detail=False, methods=['get'], url_path='history')
    def history(self, request: Request):
        """
        Get statistics history (for the line chart).

        GET /assets/statistics/history/?days=7

        Query Parameters:
            days: number of recent days to fetch; default 7, max 90

        Returns:
            list of history entries
        """
        try:
            days_param = request.query_params.get('days', '7')
            try:
                days = int(days_param)
            except (ValueError, TypeError):
                days = 7
            days = min(max(days, 1), 90)  # clamp to 1-90 days
            history = self.service.get_statistics_history(days=days)
            return Response(history)
        except (DatabaseError, OperationalError) as e:
            logger.exception("Failed to fetch statistics history")
            return Response(
                {'error': 'Failed to fetch history data'},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )


# Note: the IPAddress model has been refactored into HostPortMapping.
# IPAddressViewSet has been removed and must be reimplemented for the new architecture.
class SubdomainViewSet(viewsets.ModelViewSet):
    """Subdomain management ViewSet.

    Supports two access patterns:
    1. Nested route: GET /api/targets/{target_pk}/subdomains/
    2. Standalone route: GET /api/subdomains/ (global query)
    """
    serializer_class = SubdomainListSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['name']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = SubdomainService()

    def get_queryset(self):
        """Scope the query based on whether target_pk is present."""
        target_pk = self.kwargs.get('target_pk')
        if target_pk:
            return self.service.get_subdomains_by_target(target_pk)
        return self.service.get_all()

    @action(detail=False, methods=['post'], url_path='bulk-create')
    def bulk_create(self, request, **kwargs):
        """Bulk-create subdomains.

        POST /api/targets/{target_pk}/subdomains/bulk-create/

        Request body:
        {
            "subdomains": ["sub1.example.com", "sub2.example.com"]
        }

        Response:
        {
            "message": "Bulk create finished",
            "createdCount": 10,
            "skippedCount": 2,
            "invalidCount": 1,
            "mismatchedCount": 1,
            "totalReceived": 14
        }
        """
        from apps.targets.models import Target
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            return Response(
                {'error': 'Subdomains must be bulk-created under a target'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Fetch the target
        try:
            target = Target.objects.get(pk=target_pk)
        except Target.DoesNotExist:
            return Response(
                {'error': 'Target does not exist'},
                status=status.HTTP_404_NOT_FOUND
            )
        # The target type must be domain
        if target.type != Target.TargetType.DOMAIN:
            return Response(
                {'error': 'Only domain-type targets support importing subdomains'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Read the subdomain list from the request body
        subdomains = request.data.get('subdomains', [])
        if not subdomains or not isinstance(subdomains, list):
            return Response(
                {'error': 'Request body is empty or malformed'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Delegate to the service layer
        try:
            result = self.service.bulk_create_subdomains(
                target_id=int(target_pk),
                target_name=target.name,
                subdomains=subdomains
            )
        except Exception as e:
            logger.exception("Bulk create of subdomains failed")
            return Response(
                {'error': 'Internal server error'},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )
        return Response({
            'message': 'Bulk create finished',
            'createdCount': result.created_count,
            'skippedCount': result.skipped_count,
            'invalidCount': result.invalid_count,
            'mismatchedCount': result.mismatched_count,
            'totalReceived': result.total_received,
        }, status=status.HTTP_200_OK)

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export subdomains as CSV.

        CSV columns: name, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            raise DRFValidationError('Export must be scoped to a target')
        data_iterator = self.service.iter_raw_data_for_csv_export(target_id=target_pk)
        headers = ['name', 'created_at']
        formatters = {'created_at': format_datetime}
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-subdomains.csv"'
        return response
class WebSiteViewSet(viewsets.ModelViewSet):
    """Website management ViewSet.

    Supports two access patterns:
    1. Nested route: GET /api/targets/{target_pk}/websites/
    2. Standalone route: GET /api/websites/ (global query)
    """
    serializer_class = WebSiteSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['host']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = WebSiteService()

    def get_queryset(self):
        """Scope the query based on whether target_pk is present."""
        target_pk = self.kwargs.get('target_pk')
        if target_pk:
            return self.service.get_websites_by_target(target_pk)
        return self.service.get_all()

    @action(detail=False, methods=['post'], url_path='bulk-create')
    def bulk_create(self, request, **kwargs):
        """Bulk-create websites.

        POST /api/targets/{target_pk}/websites/bulk-create/

        Request body:
        {
            "urls": ["https://example.com", "https://test.com"]
        }

        Response:
        {
            "message": "Bulk create finished",
            "createdCount": 10,
            "mismatchedCount": 2
        }
        """
        from apps.targets.models import Target
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            return Response(
                {'error': 'Websites must be bulk-created under a target'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Fetch the target
        try:
            target = Target.objects.get(pk=target_pk)
        except Target.DoesNotExist:
            return Response(
                {'error': 'Target does not exist'},
                status=status.HTTP_404_NOT_FOUND
            )
        # Read the URL list from the request body
        urls = request.data.get('urls', [])
        if not urls or not isinstance(urls, list):
            return Response(
                {'error': 'Request body is empty or malformed'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Delegate to the service layer
        try:
            created_count = self.service.bulk_create_urls(
                target_id=int(target_pk),
                target_name=target.name,
                target_type=target.type,
                urls=urls
            )
        except Exception as e:
            logger.exception("Bulk create of websites failed")
            return Response(
                {'error': 'Internal server error'},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )
        return Response({
            'message': 'Bulk create finished',
            'createdCount': created_count,
        }, status=status.HTTP_200_OK)

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export websites as CSV.

        CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, body_preview, vhost, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            raise DRFValidationError('Export must be scoped to a target')
        data_iterator = self.service.iter_raw_data_for_csv_export(target_id=target_pk)
        headers = [
            'url', 'host', 'location', 'title', 'status_code',
            'content_length', 'content_type', 'webserver', 'tech',
            'body_preview', 'vhost', 'created_at'
        ]
        formatters = {
            'created_at': format_datetime,
            'tech': lambda x: format_list_field(x, separator=','),
        }
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-websites.csv"'
        return response
class DirectoryViewSet(viewsets.ModelViewSet):
    """Directory management ViewSet.

    Supports two access patterns:
    1. Nested route: GET /api/targets/{target_pk}/directories/
    2. Standalone route: GET /api/directories/ (global query)
    """
    serializer_class = DirectorySerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['url']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = DirectoryService()

    def get_queryset(self):
        """Scope the query based on whether target_pk is present."""
        target_pk = self.kwargs.get('target_pk')
        if target_pk:
            return self.service.get_directories_by_target(target_pk)
        return self.service.get_all()

    @action(detail=False, methods=['post'], url_path='bulk-create')
    def bulk_create(self, request, **kwargs):
        """Bulk-create directories.

        POST /api/targets/{target_pk}/directories/bulk-create/

        Request body:
        {
            "urls": ["https://example.com/admin", "https://example.com/api"]
        }

        Response:
        {
            "message": "Bulk create finished",
            "createdCount": 10,
            "mismatchedCount": 2
        }
        """
        from apps.targets.models import Target
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            return Response(
                {'error': 'Directories must be bulk-created under a target'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Fetch the target
        try:
            target = Target.objects.get(pk=target_pk)
        except Target.DoesNotExist:
            return Response(
                {'error': 'Target does not exist'},
                status=status.HTTP_404_NOT_FOUND
            )
        # Read the URL list from the request body
        urls = request.data.get('urls', [])
        if not urls or not isinstance(urls, list):
            return Response(
                {'error': 'Request body is empty or malformed'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Delegate to the service layer
        try:
            created_count = self.service.bulk_create_urls(
                target_id=int(target_pk),
                target_name=target.name,
                target_type=target.type,
                urls=urls
            )
        except Exception as e:
            logger.exception("Bulk create of directories failed")
            return Response(
                {'error': 'Internal server error'},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )
        return Response({
            'message': 'Bulk create finished',
            'createdCount': created_count,
        }, status=status.HTTP_200_OK)

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export directories as CSV.

        CSV columns: url, status, content_length, words, lines, content_type, duration, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            raise DRFValidationError('Export must be scoped to a target')
        data_iterator = self.service.iter_raw_data_for_csv_export(target_id=target_pk)
        headers = [
            'url', 'status', 'content_length', 'words',
            'lines', 'content_type', 'duration', 'created_at'
        ]
        formatters = {
            'created_at': format_datetime,
        }
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-directories.csv"'
        return response
class EndpointViewSet(viewsets.ModelViewSet):
    """Endpoint management ViewSet.

    Supports two access patterns:
    1. Nested route: GET /api/targets/{target_pk}/endpoints/
    2. Standalone route: GET /api/endpoints/ (global query)
    """
    serializer_class = EndpointListSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['host']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = EndpointService()

    def get_queryset(self):
        """Scope the query based on whether target_pk is present."""
        target_pk = self.kwargs.get('target_pk')
        if target_pk:
            return self.service.get_endpoints_by_target(target_pk)
        return self.service.get_all()

    @action(detail=False, methods=['post'], url_path='bulk-create')
    def bulk_create(self, request, **kwargs):
        """Bulk-create endpoints.

        POST /api/targets/{target_pk}/endpoints/bulk-create/

        Request body:
        {
            "urls": ["https://example.com/api/v1", "https://example.com/api/v2"]
        }

        Response:
        {
            "message": "Bulk create finished",
            "createdCount": 10,
            "mismatchedCount": 2
        }
        """
        from apps.targets.models import Target
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            return Response(
                {'error': 'Endpoints must be bulk-created under a target'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Fetch the target
        try:
            target = Target.objects.get(pk=target_pk)
        except Target.DoesNotExist:
            return Response(
                {'error': 'Target does not exist'},
                status=status.HTTP_404_NOT_FOUND
            )
        # Read the URL list from the request body
        urls = request.data.get('urls', [])
        if not urls or not isinstance(urls, list):
            return Response(
                {'error': 'Request body is empty or malformed'},
                status=status.HTTP_400_BAD_REQUEST
            )
        # Delegate to the service layer
        try:
            created_count = self.service.bulk_create_urls(
                target_id=int(target_pk),
                target_name=target.name,
                target_type=target.type,
                urls=urls
            )
        except Exception as e:
            logger.exception("Bulk create of endpoints failed")
            return Response(
                {'error': 'Internal server error'},
                status=status.HTTP_500_INTERNAL_SERVER_ERROR
            )
        return Response({
            'message': 'Bulk create finished',
            'createdCount': created_count,
        }, status=status.HTTP_200_OK)

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export endpoints as CSV.

        CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, body_preview, vhost, matched_gf_patterns, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            raise DRFValidationError('Export must be scoped to a target')
        data_iterator = self.service.iter_raw_data_for_csv_export(target_id=target_pk)
        headers = [
            'url', 'host', 'location', 'title', 'status_code',
            'content_length', 'content_type', 'webserver', 'tech',
            'body_preview', 'vhost', 'matched_gf_patterns', 'created_at'
        ]
        formatters = {
            'created_at': format_datetime,
            'tech': lambda x: format_list_field(x, separator=','),
            'matched_gf_patterns': lambda x: format_list_field(x, separator=','),
        }
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-endpoints.csv"'
        return response
class HostPortMappingViewSet(viewsets.ModelViewSet):
    """Host-port mapping management ViewSet (IP address aggregated view).

    Supports two access patterns:
    1. Nested route: GET /api/targets/{target_pk}/ip-addresses/
    2. Standalone route: GET /api/ip-addresses/ (global query)

    Returns data aggregated by IP; each IP lists all of its associated hosts and ports.
    Note: because the result is aggregated data (a list of dicts), DRF SearchFilter is not supported.
    """
    serializer_class = IPAddressAggregatedSerializer
    pagination_class = BasePagination

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = HostPortMappingService()

    def get_queryset(self):
        """Scope the query based on whether target_pk is present; returns IP-aggregated data."""
        target_pk = self.kwargs.get('target_pk')
        search = self.request.query_params.get('search', None)
        if target_pk:
            return self.service.get_ip_aggregation_by_target(target_pk, search=search)
        return self.service.get_all_ip_aggregation(search=search)

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export IP addresses as CSV.

        CSV columns: ip, host, port, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime
        target_pk = self.kwargs.get('target_pk')
        if not target_pk:
            raise DRFValidationError('Export must be scoped to a target')
        # Streaming data iterator
        data_iterator = self.service.iter_raw_data_for_csv_export(target_id=target_pk)
        # CSV headers and formatters
        headers = ['ip', 'host', 'port', 'created_at']
        formatters = {
            'created_at': format_datetime
        }
        # Build the streaming response
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-ip-addresses.csv"'
        return response
class VulnerabilityViewSet(viewsets.ModelViewSet):
    """Vulnerability asset management ViewSet (read-only).

    Supports two access patterns:
    1. Nested route: GET /api/targets/{target_pk}/vulnerabilities/
    2. Standalone route: GET /api/vulnerabilities/ (global query)
    """
    serializer_class = VulnerabilitySerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['vuln_type']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = VulnerabilityService()

    def get_queryset(self):
        """Scope the query based on whether target_pk is present."""
        target_pk = self.kwargs.get('target_pk')
        if target_pk:
            return self.service.get_vulnerabilities_by_target(target_pk)
        return self.service.get_all()
# ==================== Snapshot ViewSets (nested under Scan routes) ====================


class SubdomainSnapshotViewSet(viewsets.ModelViewSet):
    """Subdomain snapshot ViewSet - nested route: GET /api/scans/{scan_pk}/subdomains/"""
    serializer_class = SubdomainSnapshotSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['name']
    ordering_fields = ['name', 'created_at']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = SubdomainSnapshotsService()

    def get_queryset(self):
        scan_pk = self.kwargs.get('scan_pk')
        if scan_pk:
            return self.service.get_by_scan(scan_pk)
        return self.service.get_all()

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export subdomain snapshots as CSV.

        CSV columns: name, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime
        scan_pk = self.kwargs.get('scan_pk')
        if not scan_pk:
            raise DRFValidationError('Export must be scoped to a scan')
        data_iterator = self.service.iter_raw_data_for_csv_export(scan_id=scan_pk)
        headers = ['name', 'created_at']
        formatters = {'created_at': format_datetime}
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-subdomains.csv"'
        return response
class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
    """Website snapshot ViewSet - nested route: GET /api/scans/{scan_pk}/websites/"""
    serializer_class = WebsiteSnapshotSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['host']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = WebsiteSnapshotsService()

    def get_queryset(self):
        scan_pk = self.kwargs.get('scan_pk')
        if scan_pk:
            return self.service.get_by_scan(scan_pk)
        return self.service.get_all()

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export website snapshots as CSV.

        CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, body_preview, vhost, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
        scan_pk = self.kwargs.get('scan_pk')
        if not scan_pk:
            raise DRFValidationError('Export must be scoped to a scan')
        data_iterator = self.service.iter_raw_data_for_csv_export(scan_id=scan_pk)
        headers = [
            'url', 'host', 'location', 'title', 'status_code',
            'content_length', 'content_type', 'webserver', 'tech',
            'body_preview', 'vhost', 'created_at'
        ]
        formatters = {
            'created_at': format_datetime,
            'tech': lambda x: format_list_field(x, separator=','),
        }
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-websites.csv"'
        return response
class DirectorySnapshotViewSet(viewsets.ModelViewSet):
    """Directory snapshot ViewSet - nested route: GET /api/scans/{scan_pk}/directories/"""
    serializer_class = DirectorySnapshotSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['url']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = DirectorySnapshotsService()

    def get_queryset(self):
        scan_pk = self.kwargs.get('scan_pk')
        if scan_pk:
            return self.service.get_by_scan(scan_pk)
        return self.service.get_all()

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export directory snapshots as CSV.

        CSV columns: url, status, content_length, words, lines, content_type, duration, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime
        scan_pk = self.kwargs.get('scan_pk')
        if not scan_pk:
            raise DRFValidationError('Export must be scoped to a scan')
        data_iterator = self.service.iter_raw_data_for_csv_export(scan_id=scan_pk)
        headers = [
            'url', 'status', 'content_length', 'words',
            'lines', 'content_type', 'duration', 'created_at'
        ]
        formatters = {
            'created_at': format_datetime,
        }
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-directories.csv"'
        return response
class EndpointSnapshotViewSet(viewsets.ModelViewSet):
    """Endpoint snapshot ViewSet - nested route: GET /api/scans/{scan_pk}/endpoints/"""
    serializer_class = EndpointSnapshotSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['host']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = EndpointSnapshotsService()

    def get_queryset(self):
        scan_pk = self.kwargs.get('scan_pk')
        if scan_pk:
            return self.service.get_by_scan(scan_pk)
        return self.service.get_all()

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export endpoint snapshots as CSV.

        CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, body_preview, vhost, matched_gf_patterns, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
        scan_pk = self.kwargs.get('scan_pk')
        if not scan_pk:
            raise DRFValidationError('Export must be scoped to a scan')
        data_iterator = self.service.iter_raw_data_for_csv_export(scan_id=scan_pk)
        headers = [
            'url', 'host', 'location', 'title', 'status_code',
            'content_length', 'content_type', 'webserver', 'tech',
            'body_preview', 'vhost', 'matched_gf_patterns', 'created_at'
        ]
        formatters = {
            'created_at': format_datetime,
            'tech': lambda x: format_list_field(x, separator=','),
            'matched_gf_patterns': lambda x: format_list_field(x, separator=','),
        }
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-endpoints.csv"'
        return response
class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
    """Host-port mapping snapshot ViewSet - nested route: GET /api/scans/{scan_pk}/ip-addresses/

    Note: because the result is aggregated data (a list of dicts), DRF SearchFilter is not supported.
    """
    serializer_class = IPAddressAggregatedSerializer
    pagination_class = BasePagination

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = HostPortMappingSnapshotsService()

    def get_queryset(self):
        scan_pk = self.kwargs.get('scan_pk')
        search = self.request.query_params.get('search', None)
        if scan_pk:
            return self.service.get_ip_aggregation_by_scan(scan_pk, search=search)
        return self.service.get_all_ip_aggregation(search=search)

    @action(detail=False, methods=['get'], url_path='export')
    def export(self, request, **kwargs):
        """Export IP addresses as CSV.

        CSV columns: ip, host, port, created_at
        """
        from apps.common.utils import generate_csv_rows, format_datetime
        scan_pk = self.kwargs.get('scan_pk')
        if not scan_pk:
            raise DRFValidationError('Export must be scoped to a scan')
        # Streaming data iterator
        data_iterator = self.service.iter_raw_data_for_csv_export(scan_id=scan_pk)
        # CSV headers and formatters
        headers = ['ip', 'host', 'port', 'created_at']
        formatters = {
            'created_at': format_datetime
        }
        # Build the streaming response
        response = StreamingHttpResponse(
            generate_csv_rows(data_iterator, headers, formatters),
            content_type='text/csv; charset=utf-8'
        )
        response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-ip-addresses.csv"'
        return response
class VulnerabilitySnapshotViewSet(viewsets.ModelViewSet):
    """Vulnerability snapshot ViewSet - nested route: GET /api/scans/{scan_pk}/vulnerabilities/"""
    serializer_class = VulnerabilitySnapshotSerializer
    pagination_class = BasePagination
    filter_backends = [filters.SearchFilter, filters.OrderingFilter]
    search_fields = ['vuln_type']
    ordering = ['-created_at']

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.service = VulnerabilitySnapshotsService()

    def get_queryset(self):
        scan_pk = self.kwargs.get('scan_pk')
        if scan_pk:
            return self.service.get_by_scan(scan_pk)
        return self.service.get_all()
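Every export action above follows the same pattern: a streaming row iterator, a header list, and a dict of per-field formatters fed into `generate_csv_rows` (which lives in `apps.common.utils`, shown later in this diff). A minimal self-contained sketch of that formatter pattern — `generate_rows` here is an illustrative stand-in, not the project's actual helper:

```python
import csv
import io
from typing import Any, Callable, Dict, Iterator, List, Optional


def generate_rows(
    data: Iterator[Dict[str, Any]],
    headers: List[str],
    formatters: Optional[Dict[str, Callable]] = None,
) -> Iterator[str]:
    """Yield CSV lines one at a time so an HTTP response can stream them."""
    formatters = formatters or {}
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(headers)
    yield buf.getvalue()
    for record in data:
        buf.seek(0)
        buf.truncate(0)
        # Apply the per-field formatter if one is registered, else pass through
        row = [formatters.get(h, lambda v: v)(record.get(h, '')) for h in headers]
        writer.writerow(row)
        yield buf.getvalue()


rows = [{'ip': '10.0.0.1', 'port': 80, 'tech': ['nginx', 'php']}]
fmt = {'tech': lambda x: ','.join(x)}  # mirrors format_list_field(x, separator=',')
out = ''.join(generate_rows(iter(rows), ['ip', 'port', 'tech'], fmt))
print(out)
```

Because list fields are joined with commas, the csv writer quotes them per RFC 4180, which is why the real exports can round-trip `tech` and `matched_gf_patterns` values.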

View File

@@ -1,10 +0,0 @@
"""
通用服务模块
提供系统级别的公共服务,包括:
- SystemLogService: 系统日志读取服务
"""
from .system_log_service import SystemLogService
__all__ = ['SystemLogService']

View File

@@ -1,69 +0,0 @@
"""
系统日志服务模块
提供系统日志的读取功能,支持:
- 从日志目录读取日志文件
- 限制返回行数,防止内存溢出
"""
import logging
import subprocess
logger = logging.getLogger(__name__)
class SystemLogService:
"""
系统日志服务类
负责读取系统日志文件,支持从容器内路径或宿主机挂载路径读取日志。
"""
def __init__(self):
# 日志文件路径(容器内路径,通过 volume 挂载到宿主机 /opt/xingrin/logs
self.log_file = "/app/backend/logs/xingrin.log"
self.default_lines = 200 # 默认返回行数
self.max_lines = 10000 # 最大返回行数限制
self.timeout_seconds = 3 # tail 命令超时时间
def get_logs_content(self, lines: int | None = None) -> str:
"""
获取系统日志内容
Args:
lines: 返回的日志行数,默认 200 行,最大 10000 行
Returns:
str: 日志内容,每行以换行符分隔,保持原始顺序
"""
# 参数校验和默认值处理
if lines is None:
lines = self.default_lines
lines = int(lines)
if lines < 1:
lines = 1
if lines > self.max_lines:
lines = self.max_lines
# Read the tail of the log file with the tail command
cmd = ["tail", "-n", str(lines), self.log_file]
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=self.timeout_seconds,
check=False,
)
if result.returncode != 0:
logger.warning(
"tail command failed: returncode=%s stderr=%s",
result.returncode,
(result.stderr or "").strip(),
)
# Return the raw output, preserving file order
return result.stdout or ""

View File

@@ -1,21 +0,0 @@
"""
通用模块 URL 配置
路由说明:
- /api/auth/* 认证相关接口(登录、登出、用户信息)
- /api/system/* 系统管理接口(日志查看等)
"""
from django.urls import path
from .views import LoginView, LogoutView, MeView, ChangePasswordView, SystemLogsView
urlpatterns = [
# Authentication
path('auth/login/', LoginView.as_view(), name='auth-login'),
path('auth/logout/', LogoutView.as_view(), name='auth-logout'),
path('auth/me/', MeView.as_view(), name='auth-me'),
path('auth/change-password/', ChangePasswordView.as_view(), name='auth-change-password'),
# System management
path('system/logs/', SystemLogsView.as_view(), name='system-logs'),
]

View File

@@ -1,116 +0,0 @@
"""CSV 导出工具模块
提供流式 CSV 生成功能,支持:
- UTF-8 BOMExcel 兼容)
- RFC 4180 规范转义
- 流式生成(内存友好)
"""
import csv
import io
from datetime import datetime
from typing import Iterator, Dict, Any, List, Callable, Optional
# UTF-8 BOM: ensures Excel detects the encoding correctly
UTF8_BOM = '\ufeff'
def generate_csv_rows(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
field_formatters: Optional[Dict[str, Callable]] = None
) -> Iterator[str]:
"""
流式生成 CSV 行
Args:
data_iterator: 数据迭代器,每个元素是一个字典
headers: CSV 表头列表
field_formatters: 字段格式化函数字典key 为字段名value 为格式化函数
Yields:
CSV 行字符串(包含换行符)
Example:
>>> data = [{'ip': '192.168.1.1', 'hosts': ['a.com', 'b.com']}]
>>> headers = ['ip', 'hosts']
>>> formatters = {'hosts': format_list_field}
>>> for row in generate_csv_rows(iter(data), headers, formatters):
... print(row, end='')
"""
# Emit BOM + header row
output = io.StringIO()
writer = csv.writer(output, quoting=csv.QUOTE_MINIMAL)
writer.writerow(headers)
yield UTF8_BOM + output.getvalue()
# Emit data rows
for row_data in data_iterator:
output = io.StringIO()
writer = csv.writer(output, quoting=csv.QUOTE_MINIMAL)
row = []
for header in headers:
value = row_data.get(header, '')
if field_formatters and header in field_formatters:
value = field_formatters[header](value)
row.append(value if value is not None else '')
writer.writerow(row)
yield output.getvalue()
def format_list_field(values: List, separator: str = ';') -> str:
"""
将列表字段格式化为分号分隔的字符串
Args:
values: 值列表
separator: 分隔符,默认为分号
Returns:
分隔符连接的字符串
Example:
>>> format_list_field(['a.com', 'b.com'])
'a.com;b.com'
>>> format_list_field([80, 443])
'80;443'
>>> format_list_field([])
''
>>> format_list_field(None)
''
"""
if not values:
return ''
return separator.join(str(v) for v in values)
def format_datetime(dt: Optional[datetime]) -> str:
"""
格式化日期时间为字符串(转换为本地时区)
Args:
dt: datetime 对象或 None
Returns:
格式化的日期时间字符串,格式为 YYYY-MM-DD HH:MM:SS本地时区
Example:
>>> from datetime import datetime
>>> format_datetime(datetime(2024, 1, 15, 10, 30, 0))
'2024-01-15 10:30:00'
>>> format_datetime(None)
''
"""
if dt is None:
return ''
if isinstance(dt, str):
return dt
# Convert to the local time zone (taken from Django settings)
from django.utils import timezone
if timezone.is_aware(dt):
dt = timezone.localtime(dt)
return dt.strftime('%Y-%m-%d %H:%M:%S')
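The generator above can be exercised end to end without Django. A minimal, self-contained sketch of the same pattern (BOM plus header as the first chunk, then one chunk per row; RFC 4180 quoting comes from the `csv` module, which also emits `\r\n` line endings by default):

```python
import csv
import io

UTF8_BOM = '\ufeff'

def csv_chunks(rows, headers):
    # First chunk: BOM + header row
    buf = io.StringIO()
    csv.writer(buf).writerow(headers)
    yield UTF8_BOM + buf.getvalue()
    # One chunk per data row; missing keys become empty fields
    for row in rows:
        buf = io.StringIO()
        csv.writer(buf).writerow([row.get(h, '') for h in headers])
        yield buf.getvalue()

chunks = list(csv_chunks([{'ip': '10.0.0.1', 'host': 'a,b.com'}], ['ip', 'host']))
print(chunks[0])  # '\ufeffip,host\r\n'
print(chunks[1])  # '10.0.0.1,"a,b.com"\r\n' (comma-containing field gets quoted)
```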

View File

@@ -1,12 +0,0 @@
"""
通用模块视图导出
包含:
- 认证相关视图:登录、登出、用户信息、修改密码
- 系统日志视图:实时日志查看
"""
from .auth_views import LoginView, LogoutView, MeView, ChangePasswordView
from .system_log_views import SystemLogsView
__all__ = ['LoginView', 'LogoutView', 'MeView', 'ChangePasswordView', 'SystemLogsView']

View File

@@ -1,69 +0,0 @@
"""
系统日志视图模块
提供系统日志的 REST API 接口,供前端实时查看系统运行日志。
"""
import logging
from django.utils.decorators import method_decorator
from django.views.decorators.csrf import csrf_exempt
from rest_framework import status
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from rest_framework.views import APIView
from apps.common.services.system_log_service import SystemLogService
logger = logging.getLogger(__name__)
@method_decorator(csrf_exempt, name="dispatch")
class SystemLogsView(APIView):
"""
系统日志 API 视图
GET /api/system/logs/
获取系统日志内容
Query Parameters:
lines (int, optional): 返回的日志行数,默认 200最大 10000
Response:
{
"content": "日志内容字符串..."
}
Note:
- 当前为开发阶段,暂时允许匿名访问
- 生产环境应添加管理员权限验证
"""
# TODO: 生产环境应改为 IsAdminUser 权限
authentication_classes = []
permission_classes = [AllowAny]
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.service = SystemLogService()
def get(self, request):
"""
获取系统日志
支持通过 lines 参数控制返回行数,用于前端分页或实时刷新场景。
"""
try:
# 解析 lines 参数
lines_raw = request.query_params.get("lines")
lines = int(lines_raw) if lines_raw is not None else None
# 调用服务获取日志内容
content = self.service.get_logs_content(lines=lines)
return Response({"content": content})
except ValueError:
return Response({"error": "lines 参数必须是整数"}, status=status.HTTP_400_BAD_REQUEST)
except Exception:
logger.exception("获取系统日志失败")
return Response({"error": "获取系统日志失败"}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)

View File

@@ -1,22 +0,0 @@
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import (
ScanEngineViewSet,
WorkerNodeViewSet,
WordlistViewSet,
NucleiTemplateRepoViewSet,
)
# Create the router
router = DefaultRouter()
router.register(r"engines", ScanEngineViewSet, basename="engine")
router.register(r"workers", WorkerNodeViewSet, basename="worker")
router.register(r"wordlists", WordlistViewSet, basename="wordlist")
router.register(r"nuclei/repos", NucleiTemplateRepoViewSet, basename="nuclei-repos")
urlpatterns = [
path("", include(router.urls)),
]

View File

@@ -1,698 +0,0 @@
"""
目录扫描 Flow
负责编排目录扫描的完整流程
架构:
- Flow 负责编排多个原子 Task
- 支持并发执行扫描工具(使用 ThreadPoolTaskRunner
- 每个 Task 可独立重试
- 配置由 YAML 解析
"""
# Django environment setup (takes effect on import)
from apps.common.prefect_django_setup import setup_django_for_prefect
from prefect import flow
from prefect.task_runners import ThreadPoolTaskRunner
import hashlib
import logging
import os
import subprocess
from datetime import datetime
from pathlib import Path
from typing import List, Tuple
from apps.scan.tasks.directory_scan import (
export_sites_task,
run_and_stream_save_directories_task
)
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command, ensure_wordlist_local
logger = logging.getLogger(__name__)
# Default maximum concurrency
DEFAULT_MAX_WORKERS = 5
def calculate_directory_scan_timeout(
tool_config: dict,
base_per_word: float = 1.0,
min_timeout: int = 60,
max_timeout: int = 7200
) -> int:
"""
根据字典行数计算目录扫描超时时间
计算公式:超时时间 = 字典行数 × 每个单词基础时间
超时范围60秒 ~ 2小时7200秒
Args:
tool_config: 工具配置字典,包含 wordlist 路径
base_per_word: 每个单词的基础时间(秒),默认 1.0秒
min_timeout: 最小超时时间(秒),默认 60秒
max_timeout: 最大超时时间(秒),默认 7200秒2小时
Returns:
int: 计算出的超时时间范围60 ~ 7200
Example:
# 1000行字典 × 1.0秒 = 1000秒 → 限制为7200秒中的 1000秒
# 10000行字典 × 1.0秒 = 10000秒 → 限制为7200秒最大值
timeout = calculate_directory_scan_timeout(
tool_config={'wordlist': '/path/to/wordlist.txt'}
)
"""
try:
# Get the wordlist path from tool_config
wordlist_path = tool_config.get('wordlist')
if not wordlist_path:
logger.warning("工具配置中未指定 wordlist,使用默认超时: %d", min_timeout)
return min_timeout
# Expand the user home directory (~)
wordlist_path = os.path.expanduser(wordlist_path)
# Check that the file exists
if not os.path.exists(wordlist_path):
logger.warning("字典文件不存在: %s,使用默认超时: %d", wordlist_path, min_timeout)
return min_timeout
# Count wordlist lines quickly with wc -l
result = subprocess.run(
['wc', '-l', wordlist_path],
capture_output=True,
text=True,
check=True
)
# wc -l output format: line count + space + filename
line_count = int(result.stdout.strip().split()[0])
# Compute the timeout
timeout = int(line_count * base_per_word)
# Apply the lower bound (no upper bound is enforced)
timeout = max(min_timeout, timeout)
logger.info(
"目录扫描超时计算 - 字典: %s, 行数: %d, 基础时间: %.3f秒/词, 计算超时: %d",
wordlist_path, line_count, base_per_word, timeout
)
return timeout
except subprocess.CalledProcessError as e:
logger.error("统计字典行数失败: %s", e)
# Fall back to the default timeout on failure
return min_timeout
except (ValueError, IndexError) as e:
logger.error("解析字典行数失败: %s", e)
return min_timeout
except Exception as e:
logger.error("计算超时时间异常: %s", e)
return min_timeout
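Stripped of the file I/O and `wc -l` call, the sizing rule above is "one second per wordlist entry, floored at min_timeout". A sketch of just the arithmetic (`directory_timeout` is a hypothetical stand-in that takes the line count directly):

```python
def directory_timeout(line_count, base_per_word=1.0, min_timeout=60):
    # timeout = lines × seconds-per-word, floored at min_timeout (no upper bound)
    return max(min_timeout, int(line_count * base_per_word))

print(directory_timeout(1000))  # 1000
print(directory_timeout(30))    # 60 (floored)
```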
def _get_max_workers(tool_config: dict, default: int = DEFAULT_MAX_WORKERS) -> int:
"""
从单个工具配置中获取 max_workers 参数
Args:
tool_config: 单个工具的配置字典,如 {'max_workers': 10, 'threads': 5, ...}
default: 默认值,默认为 5
Returns:
int: max_workers 值
"""
if not isinstance(tool_config, dict):
return default
# 支持 max_workers 和 max-workersYAML 中划线会被转换)
max_workers = tool_config.get('max_workers') or tool_config.get('max-workers')
if max_workers is not None and isinstance(max_workers, int) and max_workers > 0:
return max_workers
return default
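Because YAML keys may arrive with either underscores or hyphens, the lookup above tries both spellings before falling back to the default. A self-contained sketch of the same fallback chain:

```python
DEFAULT_MAX_WORKERS = 5

def get_max_workers(tool_config, default=DEFAULT_MAX_WORKERS):
    # Non-dict configs (None, lists, ...) fall back to the default
    if not isinstance(tool_config, dict):
        return default
    # Accept both spellings; only positive ints are honored
    mw = tool_config.get('max_workers') or tool_config.get('max-workers')
    return mw if isinstance(mw, int) and mw > 0 else default

print(get_max_workers({'max_workers': 10}))  # 10
print(get_max_workers({'max-workers': 8}))   # 8
print(get_max_workers({'threads': 4}))       # 5
```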
def _setup_directory_scan_directory(scan_workspace_dir: str) -> Path:
"""
创建并验证目录扫描工作目录
Args:
scan_workspace_dir: 扫描工作空间目录
Returns:
Path: 目录扫描目录路径
Raises:
RuntimeError: 目录创建或验证失败
"""
directory_scan_dir = Path(scan_workspace_dir) / 'directory_scan'
directory_scan_dir.mkdir(parents=True, exist_ok=True)
if not directory_scan_dir.is_dir():
raise RuntimeError(f"目录扫描目录创建失败: {directory_scan_dir}")
if not os.access(directory_scan_dir, os.W_OK):
raise RuntimeError(f"目录扫描目录不可写: {directory_scan_dir}")
return directory_scan_dir
def _export_site_urls(target_id: int, target_name: str, directory_scan_dir: Path) -> tuple[str, int]:
"""
导出目标下的所有站点 URL 到文件(支持懒加载)
Args:
target_id: 目标 ID
target_name: 目标名称(用于懒加载创建默认站点)
directory_scan_dir: 目录扫描目录
Returns:
tuple: (sites_file, site_count)
Raises:
ValueError: 站点数量为 0
"""
logger.info("Step 1: 导出目标的所有站点 URL")
sites_file = str(directory_scan_dir / 'sites.txt')
export_result = export_sites_task(
target_id=target_id,
output_file=sites_file,
batch_size=1000,  # read 1000 rows at a time to limit memory use
target_name=target_name  # pass target_name for lazy site creation
)
site_count = export_result['total_count']
logger.info(
"✓ 站点 URL 导出完成 - 文件: %s, 数量: %d",
export_result['output_file'],
site_count
)
if site_count == 0:
logger.warning("目标下没有站点,无法执行目录扫描")
# Do not raise here; the caller decides how to handle an empty target
return export_result['output_file'], site_count
def _run_scans_sequentially(
enabled_tools: dict,
sites_file: str,
directory_scan_dir: Path,
scan_id: int,
target_id: int,
site_count: int,
target_name: str
) -> tuple[int, int, list]:
"""
串行执行目录扫描任务(支持多工具)- 已废弃,保留用于兼容
Args:
enabled_tools: 启用的工具配置字典
sites_file: 站点文件路径
directory_scan_dir: 目录扫描目录
scan_id: 扫描任务 ID
target_id: 目标 ID
site_count: 站点数量
target_name: 目标名称(用于错误日志)
Returns:
tuple: (total_directories, processed_sites, failed_sites)
"""
# Read the site list
sites = []
with open(sites_file, 'r', encoding='utf-8') as f:
for line in f:
site_url = line.strip()
if site_url:
sites.append(site_url)
logger.info("准备扫描 %d 个站点,使用工具: %s", len(sites), ', '.join(enabled_tools.keys()))
total_directories = 0
processed_sites_set = set()  # a set avoids double counting
failed_sites = []
# 遍历每个工具
for tool_name, tool_config in enabled_tools.items():
logger.info("="*60)
logger.info("使用工具: %s", tool_name)
logger.info("="*60)
# If wordlist_name is configured, first make sure the wordlist exists locally (with hash verification)
wordlist_name = tool_config.get('wordlist_name')
if wordlist_name:
try:
local_wordlist_path = ensure_wordlist_local(wordlist_name)
tool_config['wordlist'] = local_wordlist_path
except Exception as exc:
logger.error("为工具 %s 准备字典失败: %s", tool_name, exc)
# This tool cannot run; mark all sites as failed and move on to the next tool
failed_sites.extend(sites)
continue
# 逐个站点执行扫描
for idx, site_url in enumerate(sites, 1):
logger.info(
"[%d/%d] 开始扫描站点: %s (工具: %s)",
idx, len(sites), site_url, tool_name
)
# Use the unified command builder
try:
command = build_scan_command(
tool_name=tool_name,
scan_type='directory_scan',
command_params={
'url': site_url
},
tool_config=tool_config
)
except Exception as e:
logger.error(
"✗ [%d/%d] 构建 %s 命令失败: %s - 站点: %s",
idx, len(sites), tool_name, e, site_url
)
failed_sites.append(site_url)
continue
# Per-site timeout from the config (supports 'auto' for dynamic calculation)
# ffuf scans sites one at a time, so timeout applies to a single site
site_timeout = tool_config.get('timeout', 300)
if site_timeout == 'auto':
# 动态计算超时时间(基于字典行数)
site_timeout = calculate_directory_scan_timeout(tool_config)
logger.info(f"✓ 工具 {tool_name} 动态计算 timeout: {site_timeout}")
# Generate the log file path (datetime is already imported at module level)
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
log_file = directory_scan_dir / f"{tool_name}_{timestamp}_{idx}.log"
try:
# 直接调用 task串行执行
result = run_and_stream_save_directories_task(
cmd=command,
tool_name=tool_name, # 新增:工具名称
scan_id=scan_id,
target_id=target_id,
site_url=site_url,
cwd=str(directory_scan_dir),
shell=True,
batch_size=1000,
timeout=site_timeout,
log_file=str(log_file) # 新增:日志文件路径
)
total_directories += result.get('created_directories', 0)
processed_sites_set.add(site_url) # 使用 set 记录成功的站点
logger.info(
"✓ [%d/%d] 站点扫描完成: %s - 发现 %d 个目录",
idx, len(sites), site_url,
result.get('created_directories', 0)
)
except subprocess.TimeoutExpired as exc:
# 超时异常单独处理
failed_sites.append(site_url)
logger.warning(
"⚠️ [%d/%d] 站点扫描超时: %s - 超时配置: %d\n"
"注意:超时前已解析的目录数据已保存到数据库,但扫描未完全完成。",
idx, len(sites), site_url, site_timeout
)
except Exception as exc:
# 其他异常
failed_sites.append(site_url)
logger.error(
"✗ [%d/%d] 站点扫描失败: %s - 错误: %s",
idx, len(sites), site_url, exc
)
# 每 10 个站点输出进度
if idx % 10 == 0:
logger.info(
"进度: %d/%d (%.1f%%) - 已发现 %d 个目录",
idx, len(sites), idx/len(sites)*100, total_directories
)
# 计算成功和失败的站点数
processed_count = len(processed_sites_set)
if failed_sites:
logger.warning(
"部分站点扫描失败: %d/%d",
len(failed_sites), len(sites)
)
logger.info(
"✓ 串行目录扫描执行完成 - 成功: %d/%d, 失败: %d, 总目录数: %d",
processed_count, len(sites), len(failed_sites), total_directories
)
return total_directories, processed_count, failed_sites
def _generate_log_filename(tool_name: str, site_url: str, directory_scan_dir: Path) -> Path:
"""
生成唯一的日志文件名
使用 URL 的 hash 确保并发时不会冲突
Args:
tool_name: 工具名称
site_url: 站点 URL
directory_scan_dir: 目录扫描目录
Returns:
Path: 日志文件路径
"""
url_hash = hashlib.md5(site_url.encode()).hexdigest()[:8]
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S_%f')
return directory_scan_dir / f"{tool_name}_{url_hash}_{timestamp}.log"
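The naming scheme above keys each log on an 8-hex-char URL digest plus a microsecond timestamp, so concurrent scans of different URLs, and repeated scans of the same URL, land in distinct files. A self-contained sketch (the `/tmp/scan` base dir is an arbitrary assumption):

```python
import hashlib
from datetime import datetime
from pathlib import Path

def log_filename(tool_name, site_url, base_dir=Path('/tmp/scan')):
    # Same URL → same 8-char digest; timestamp (with microseconds) disambiguates reruns
    url_hash = hashlib.md5(site_url.encode()).hexdigest()[:8]
    stamp = datetime.now().strftime('%Y%m%d_%H%M%S_%f')
    return base_dir / f"{tool_name}_{url_hash}_{stamp}.log"

name = log_filename('ffuf', 'https://example.com')
print(name.name.startswith('ffuf_'))  # True
```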
def _run_scans_concurrently(
enabled_tools: dict,
sites_file: str,
directory_scan_dir: Path,
scan_id: int,
target_id: int,
site_count: int,
target_name: str
) -> Tuple[int, int, List[str]]:
"""
并发执行目录扫描任务(使用 ThreadPoolTaskRunner
Args:
enabled_tools: 启用的工具配置字典
sites_file: 站点文件路径
directory_scan_dir: 目录扫描目录
scan_id: 扫描任务 ID
target_id: 目标 ID
site_count: 站点数量
target_name: 目标名称(用于错误日志)
Returns:
tuple: (total_directories, processed_sites, failed_sites)
"""
# Read the site list
sites: List[str] = []
with open(sites_file, 'r', encoding='utf-8') as f:
for line in f:
site_url = line.strip()
if site_url:
sites.append(site_url)
if not sites:
logger.warning("站点列表为空")
return 0, 0, []
logger.info(
"准备并发扫描 %d 个站点,使用工具: %s",
len(sites), ', '.join(enabled_tools.keys())
)
total_directories = 0
processed_sites_count = 0
failed_sites: List[str] = []
# Iterate over the tools
for tool_name, tool_config in enabled_tools.items():
# Each tool reads its own max_workers setting
max_workers = _get_max_workers(tool_config)
logger.info("="*60)
logger.info("使用工具: %s (并发模式, max_workers=%d)", tool_name, max_workers)
logger.info("="*60)
# If wordlist_name is configured, first make sure the wordlist exists locally (with hash verification)
wordlist_name = tool_config.get('wordlist_name')
if wordlist_name:
try:
local_wordlist_path = ensure_wordlist_local(wordlist_name)
tool_config['wordlist'] = local_wordlist_path
except Exception as exc:
logger.error("为工具 %s 准备字典失败: %s", tool_name, exc)
# 当前工具无法执行,将所有站点视为失败,继续下一个工具
failed_sites.extend(sites)
continue
# Compute the timeout (shared by all sites)
site_timeout = tool_config.get('timeout', 300)
if site_timeout == 'auto':
site_timeout = calculate_directory_scan_timeout(tool_config)
logger.info(f"✓ 工具 {tool_name} 动态计算 timeout: {site_timeout}")
# 准备所有站点的扫描参数
scan_params_list = []
for idx, site_url in enumerate(sites, 1):
try:
command = build_scan_command(
tool_name=tool_name,
scan_type='directory_scan',
command_params={'url': site_url},
tool_config=tool_config
)
log_file = _generate_log_filename(tool_name, site_url, directory_scan_dir)
scan_params_list.append({
'idx': idx,
'site_url': site_url,
'command': command,
'log_file': str(log_file),
'timeout': site_timeout
})
except Exception as e:
logger.error(
"✗ [%d/%d] 构建 %s 命令失败: %s - 站点: %s",
idx, len(sites), tool_name, e, site_url
)
failed_sites.append(site_url)
if not scan_params_list:
logger.warning("没有有效的扫描任务")
continue
# Run concurrently with ThreadPoolTaskRunner
logger.info("开始并发提交 %d 个扫描任务...", len(scan_params_list))
with ThreadPoolTaskRunner(max_workers=max_workers) as task_runner:
# Submit all tasks
futures = []
for params in scan_params_list:
future = run_and_stream_save_directories_task.submit(
cmd=params['command'],
tool_name=tool_name,
scan_id=scan_id,
target_id=target_id,
site_url=params['site_url'],
cwd=str(directory_scan_dir),
shell=True,
batch_size=1000,
timeout=params['timeout'],
log_file=params['log_file']
)
futures.append((params['idx'], params['site_url'], future))
logger.info("✓ 已提交 %d 个扫描任务,等待完成...", len(futures))
# Wait for all tasks and aggregate the results
for idx, site_url, future in futures:
try:
result = future.result()
directories_found = result.get('created_directories', 0)
total_directories += directories_found
processed_sites_count += 1
logger.info(
"✓ [%d/%d] 站点扫描完成: %s - 发现 %d 个目录",
idx, len(sites), site_url, directories_found
)
except Exception as exc:
failed_sites.append(site_url)
# Distinguish timeout failures from other errors
if 'timeout' in str(exc).lower() or isinstance(exc, subprocess.TimeoutExpired):
logger.warning(
"⚠️ [%d/%d] 站点扫描超时: %s - 错误: %s",
idx, len(sites), site_url, exc
)
else:
logger.error(
"✗ [%d/%d] 站点扫描失败: %s - 错误: %s",
idx, len(sites), site_url, exc
)
# 输出汇总信息
if failed_sites:
logger.warning(
"部分站点扫描失败: %d/%d",
len(failed_sites), len(sites)
)
logger.info(
"✓ 并发目录扫描执行完成 - 成功: %d/%d, 失败: %d, 总目录数: %d",
processed_sites_count, len(sites), len(failed_sites), total_directories
)
return total_directories, processed_sites_count, failed_sites
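The submit-then-aggregate pattern above maps directly onto the stdlib `concurrent.futures` API, which Prefect's ThreadPoolTaskRunner resembles. This sketch runs hypothetical per-site "scans" concurrently and tallies successes and failures the same way (`scan_site` is a stand-in for the real task, not part of this codebase):

```python
from concurrent.futures import ThreadPoolExecutor

def scan_site(url):
    # Stand-in for run_and_stream_save_directories_task
    if 'bad' in url:
        raise RuntimeError(f'scan failed: {url}')
    return {'created_directories': 3}

sites = ['https://a.com', 'https://bad.com', 'https://b.com']
total, processed, failed = 0, 0, []
with ThreadPoolExecutor(max_workers=2) as pool:
    # Submit everything first, then consume results in submission order
    futures = [(u, pool.submit(scan_site, u)) for u in sites]
    for url, fut in futures:
        try:
            total += fut.result()['created_directories']
            processed += 1
        except Exception:
            failed.append(url)

print(total, processed, failed)  # 6 2 ['https://bad.com']
```

One failed site does not abort the batch; it simply lands in `failed`, mirroring how the flow records failed sites and keeps going.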
@flow(
name="directory_scan",
log_prints=True,
on_running=[on_scan_flow_running],
on_completion=[on_scan_flow_completed],
on_failure=[on_scan_flow_failed],
)
def directory_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: dict
) -> dict:
"""
目录扫描 Flow
主要功能:
1. 从 target 获取所有站点的 URL
2. 对每个站点 URL 执行目录扫描(支持 ffuf 等工具)
3. 流式保存扫描结果到数据库 Directory 表
工作流程:
Step 0: 创建工作目录
Step 1: 导出站点 URL 列表到文件(供扫描工具使用)
Step 2: 验证工具配置
Step 3: 并发执行扫描工具并实时保存结果(使用 ThreadPoolTaskRunner
ffuf 输出字段:
- url: 发现的目录/文件 URL
- length: 响应内容长度
- status: HTTP 状态码
- words: 响应内容单词数
- lines: 响应内容行数
- content_type: 内容类型
- duration: 请求耗时
Args:
scan_id: 扫描任务 ID
target_name: 目标名称
target_id: 目标 ID
scan_workspace_dir: 扫描工作空间目录
enabled_tools: 启用的工具配置字典
Returns:
dict: {
'success': bool,
'scan_id': int,
'target': str,
'scan_workspace_dir': str,
'sites_file': str,
'site_count': int,
'total_directories': int, # 发现的总目录数
'processed_sites': int, # 成功处理的站点数
'failed_sites_count': int, # 失败的站点数
'executed_tasks': list
}
Raises:
ValueError: 参数错误
RuntimeError: 执行失败
"""
try:
logger.info(
"="*60 + "\n" +
"开始目录扫描\n" +
f" Scan ID: {scan_id}\n" +
f" Target: {target_name}\n" +
f" Workspace: {scan_workspace_dir}\n" +
"="*60
)
# 参数验证
if scan_id is None:
raise ValueError("scan_id 不能为空")
if not target_name:
raise ValueError("target_name 不能为空")
if target_id is None:
raise ValueError("target_id 不能为空")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir 不能为空")
if not enabled_tools:
raise ValueError("enabled_tools 不能为空")
# Step 0: 创建工作目录
directory_scan_dir = _setup_directory_scan_directory(scan_workspace_dir)
# Step 1: export site URLs (supports lazy creation)
sites_file, site_count = _export_site_urls(target_id, target_name, directory_scan_dir)
if site_count == 0:
logger.warning("目标下没有站点,跳过目录扫描")
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': scan_workspace_dir,
'sites_file': sites_file,
'site_count': 0,
'total_directories': 0,
'processed_sites': 0,
'failed_sites_count': 0,
'executed_tasks': ['export_sites']
}
# Step 2: 工具配置信息
logger.info("Step 2: 工具配置信息")
tool_info = []
for tool_name, tool_config in enabled_tools.items():
mw = _get_max_workers(tool_config)
tool_info.append(f"{tool_name}(max_workers={mw})")
logger.info("✓ 启用工具: %s", ', '.join(tool_info))
# Step 3: 并发执行扫描工具并实时保存结果
logger.info("Step 3: 并发执行扫描工具并实时保存结果")
total_directories, processed_sites, failed_sites = _run_scans_concurrently(
enabled_tools=enabled_tools,
sites_file=sites_file,
directory_scan_dir=directory_scan_dir,
scan_id=scan_id,
target_id=target_id,
site_count=site_count,
target_name=target_name
)
# 检查是否所有站点都失败
if processed_sites == 0 and site_count > 0:
logger.warning("所有站点扫描均失败 - 总站点数: %d, 失败数: %d", site_count, len(failed_sites))
# 不抛出异常,让扫描继续
logger.info("="*60 + "\n✓ 目录扫描完成\n" + "="*60)
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': scan_workspace_dir,
'sites_file': sites_file,
'site_count': site_count,
'total_directories': total_directories,
'processed_sites': processed_sites,
'failed_sites_count': len(failed_sites),
'executed_tasks': ['export_sites', 'run_and_stream_save_directories']
}
except Exception as e:
logger.exception("目录扫描失败: %s", e)
raise

View File

@@ -1,279 +0,0 @@
"""
扫描初始化 Flow
负责编排扫描任务的初始化流程
职责:
- 使用 FlowOrchestrator 解析 YAML 配置
- 在 Prefect Flow 中执行子 FlowSubflow
- 按照 YAML 顺序编排工作流
- 不包含具体业务逻辑(由 Tasks 和 FlowOrchestrator 实现)
架构:
- Flow: Prefect 编排层(本文件)
- FlowOrchestrator: 配置解析和执行计划apps/scan/services/
- Tasks: 执行层apps/scan/tasks/
- Handlers: 状态管理apps/scan/handlers/
"""
# Django environment setup (takes effect on import)
# Note: dynamic scan containers should start via run_initiate_scan.py so env vars are set before import
from apps.common.prefect_django_setup import setup_django_for_prefect
from prefect import flow, task
from pathlib import Path
import logging
from apps.scan.handlers import (
on_initiate_scan_flow_running,
on_initiate_scan_flow_completed,
on_initiate_scan_flow_failed,
)
from prefect.futures import wait
from apps.scan.tasks.workspace_tasks import create_scan_workspace_task
from apps.scan.orchestrators import FlowOrchestrator
logger = logging.getLogger(__name__)
@task(name="run_subflow")
def _run_subflow_task(scan_type: str, flow_func, flow_kwargs: dict):
"""包装子 Flow 的 Task用于在并行阶段并发执行子 Flow。"""
logger.info("开始执行子 Flow: %s", scan_type)
return flow_func(**flow_kwargs)
@flow(
name='initiate_scan',
description='扫描任务初始化流程',
log_prints=True,
on_running=[on_initiate_scan_flow_running],
on_completion=[on_initiate_scan_flow_completed],
on_failure=[on_initiate_scan_flow_failed],
)
def initiate_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
engine_name: str,
scheduled_scan_name: str | None = None,
) -> dict:
"""
初始化扫描任务(动态工作流编排)
根据 YAML 配置动态编排工作流:
- 从数据库获取 engine_config (YAML)
- 检测启用的扫描类型
- 按照定义的阶段执行:
Stage 1: Discovery (顺序执行)
- subdomain_discovery
- port_scan
- site_scan
Stage 2: Analysis (并行执行)
- url_fetch
- directory_scan
Args:
scan_id: 扫描任务 ID
target_name: 目标名称
target_id: 目标 ID
scan_workspace_dir: Scan 工作空间目录路径
engine_name: 引擎名称(用于显示)
scheduled_scan_name: 定时扫描任务名称(可选,用于通知显示)
Returns:
dict: 执行结果摘要
Raises:
ValueError: 参数验证失败或配置无效
RuntimeError: 执行失败
"""
try:
# ==================== Argument validation ====================
if not scan_id:
raise ValueError("scan_id is required")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir is required")
if not engine_name:
raise ValueError("engine_name is required")
logger.info(
"="*60 + "\n" +
"开始初始化扫描任务\n" +
f" Scan ID: {scan_id}\n" +
f" Target: {target_name}\n" +
f" Engine: {engine_name}\n" +
f" Workspace: {scan_workspace_dir}\n" +
"="*60
)
# ==================== Task 1: create the scan workspace ====================
scan_workspace_path = create_scan_workspace_task(scan_workspace_dir)
# ==================== Task 2: fetch the engine configuration ====================
from apps.scan.models import Scan
scan = Scan.objects.select_related('engine').get(id=scan_id)
engine_config = scan.engine.configuration
# ==================== Task 3: parse the config and build the execution plan ====================
orchestrator = FlowOrchestrator(engine_config)
# FlowOrchestrator has already parsed all tool configs
enabled_tools_by_type = orchestrator.enabled_tools_by_type
logger.info(
f"执行计划生成成功:\n"
f" 扫描类型: {', '.join(orchestrator.scan_types)}\n"
f" 总共 {len(orchestrator.scan_types)} 个 Flow"
)
# ==================== Initialize stage progress ====================
# Initialize right after the config is parsed, once the full scan_types list is known
from apps.scan.services import ScanService
scan_service = ScanService()
scan_service.init_stage_progress(scan_id, orchestrator.scan_types)
logger.info(f"✓ 初始化阶段进度 - Stages: {orchestrator.scan_types}")
# ==================== Update the target's last-scanned time ====================
# Updated when the scan starts, i.e. "time the last scan started"
from apps.targets.services import TargetService
target_service = TargetService()
target_service.update_last_scanned_at(target_id)
logger.info(f"✓ 更新 Target 最后扫描时间 - Target ID: {target_id}")
# ==================== Task 4: run the flows (dynamic stage execution) ====================
# Note: per-stage status updates are handled automatically by scan_flow_handlers.py (running/completed/failed)
executed_flows = []
results = {}
# 通用执行参数
flow_kwargs = {
'scan_id': scan_id,
'target_name': target_name,
'target_id': target_id,
'scan_workspace_dir': str(scan_workspace_path)
}
def record_flow_result(scan_type, result=None, error=None):
"""
统一的结果记录函数
Args:
scan_type: 扫描类型名称
result: 执行结果(成功时)
error: 异常对象(失败时)
"""
if error:
# On failure: record the error but do not raise, so later stages still run
error_msg = f"{scan_type} 执行失败: {str(error)}"
logger.warning(error_msg)
executed_flows.append(f"{scan_type} (失败)")
results[scan_type] = {'success': False, 'error': str(error)}
else:
# On success
executed_flows.append(scan_type)
results[scan_type] = result
logger.info(f"{scan_type} 执行成功")
def get_valid_flows(flow_names):
"""
获取有效的 Flow 函数列表,并为每个 Flow 准备专属参数
Args:
flow_names: 扫描类型名称列表
Returns:
list: [(scan_type, flow_func, flow_specific_kwargs), ...] 有效的函数列表
"""
valid_flows = []
for scan_type in flow_names:
flow_func = orchestrator.get_flow_function(scan_type)
if flow_func:
# Prepare per-flow kwargs (including the matching enabled_tools)
flow_specific_kwargs = dict(flow_kwargs)
flow_specific_kwargs['enabled_tools'] = enabled_tools_by_type.get(scan_type, {})
valid_flows.append((scan_type, flow_func, flow_specific_kwargs))
else:
logger.warning(f"跳过未实现的 Flow: {scan_type}")
return valid_flows
# ---------------------------------------------------------
# Dynamic stage execution (driven by the FlowOrchestrator definition)
# ---------------------------------------------------------
for mode, enabled_flows in orchestrator.get_execution_stages():
if mode == 'sequential':
# 顺序执行
logger.info(f"\n{'='*60}\n顺序执行阶段: {', '.join(enabled_flows)}\n{'='*60}")
for scan_type, flow_func, flow_specific_kwargs in get_valid_flows(enabled_flows):
logger.info(f"\n{'='*60}\n执行 Flow: {scan_type}\n{'='*60}")
try:
result = flow_func(**flow_specific_kwargs)
record_flow_result(scan_type, result=result)
except Exception as e:
record_flow_result(scan_type, error=e)
elif mode == 'parallel':
# Parallel stage: wrap subflows in tasks and run them concurrently via the Prefect TaskRunner
logger.info(f"\n{'='*60}\n并行执行阶段: {', '.join(enabled_flows)}\n{'='*60}")
futures = []
# 提交所有并行子 Flow 任务
for scan_type, flow_func, flow_specific_kwargs in get_valid_flows(enabled_flows):
logger.info(f"\n{'='*60}\n提交并行子 Flow 任务: {scan_type}\n{'='*60}")
future = _run_subflow_task.submit(
scan_type=scan_type,
flow_func=flow_func,
flow_kwargs=flow_specific_kwargs,
)
futures.append((scan_type, future))
# 等待所有并行子 Flow 完成
if futures:
wait([f for _, f in futures])
# 检查结果(复用统一的结果处理逻辑)
for scan_type, future in futures:
try:
result = future.result()
record_flow_result(scan_type, result=result)
except Exception as e:
record_flow_result(scan_type, error=e)
# ==================== 完成 ====================
logger.info(
"="*60 + "\n" +
"✓ 扫描任务初始化完成\n" +
f" 执行的 Flow: {', '.join(executed_flows)}\n" +
"="*60
)
# ==================== 返回结果 ====================
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': str(scan_workspace_path),
'executed_flows': executed_flows,
'results': results
}
except ValueError as e:
# Invalid arguments
logger.error("参数错误: %s", e)
raise
except RuntimeError as e:
# Execution failure
logger.error("运行时错误: %s", e)
raise
except OSError as e:
# Filesystem error (workspace creation failed)
logger.error("文件系统错误: %s", e)
raise
except Exception as e:
# Other unexpected errors
logger.exception("初始化扫描任务失败: %s", e)
# Note: failure-state updates are handled automatically by the Prefect state handlers
raise
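The key design choice in `record_flow_result` above is that a failed stage is recorded but never re-raised, so later stages still run. A stripped-down sketch of that error policy (the flow callables here are stand-ins, not the real subflows):

```python
results, executed = {}, []

def record(scan_type, result=None, error=None):
    # Failures are logged into results instead of aborting the loop
    if error:
        executed.append(f"{scan_type} (failed)")
        results[scan_type] = {'success': False, 'error': str(error)}
    else:
        executed.append(scan_type)
        results[scan_type] = result

def failing_flow():
    raise RuntimeError('boom')

flows = {
    'port_scan': lambda: {'success': True},
    'site_scan': failing_flow,
    'url_fetch': lambda: {'success': True},
}
for name, fn in flows.items():
    try:
        record(name, result=fn())
    except Exception as e:
        record(name, error=e)

print(executed)  # ['port_scan', 'site_scan (failed)', 'url_fetch']
```

`url_fetch` still runs even though `site_scan` blew up; the caller inspects `results` afterwards to decide how to report.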

View File

@@ -1,524 +0,0 @@
"""
端口扫描 Flow
负责编排端口扫描的完整流程
架构:
- Flow 负责编排多个原子 Task
- 支持串行执行扫描工具(流式处理)
- 每个 Task 可独立重试
- 配置由 YAML 解析
"""
# Django environment setup (takes effect on import)
from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
import os
import subprocess
from pathlib import Path
from typing import Callable
from prefect import flow
from apps.scan.tasks.port_scan import (
export_scan_targets_task,
run_and_stream_save_ports_task
)
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command
logger = logging.getLogger(__name__)
def calculate_port_scan_timeout(
tool_config: dict,
file_path: str,
base_per_pair: float = 0.5
) -> int:
"""
根据目标数量和端口数量计算超时时间
计算公式:超时时间 = 目标数 × 端口数 × base_per_pair
超时范围60秒 ~ 2天172800秒
Args:
tool_config: 工具配置字典包含端口配置ports, top-ports等
file_path: 目标文件路径(域名/IP列表
base_per_pair: 每个"端口-目标对"的基础时间(秒),默认 0.5秒
Returns:
int: 计算出的超时时间范围60 ~ 172800
Example:
# 100个目标 × 100个端口 × 0.5秒 = 5000秒
# 10个目标 × 1000个端口 × 0.5秒 = 5000秒
timeout = calculate_port_scan_timeout(
tool_config={'top-ports': 100},
file_path='/path/to/domains.txt'
)
"""
try:
# 1. Count the targets
result = subprocess.run(
['wc', '-l', file_path],
capture_output=True,
text=True,
check=True
)
target_count = int(result.stdout.strip().split()[0])
# 2. Parse the port count
port_count = _parse_port_count(tool_config)
# 3. Compute the timeout
# total work = target count × port count
total_work = target_count * port_count
timeout = int(total_work * base_per_pair)
# 4. Apply the lower bound (no upper bound is enforced)
min_timeout = 60  # minimum 60 seconds
timeout = max(min_timeout, timeout)
logger.info(
f"计算端口扫描 timeout - "
f"目标数: {target_count}, "
f"端口数: {port_count}, "
f"总工作量: {total_work}, "
f"超时: {timeout}"
)
return timeout
except Exception as e:
logger.warning(f"计算 timeout 失败: {e},使用默认值 600秒")
return 600
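Without the `wc -l` call, the sizing rule above is simply targets × ports × base seconds, floored at 60. A sketch of the arithmetic (`port_scan_timeout` is a hypothetical stand-in that takes both counts directly):

```python
def port_scan_timeout(target_count, port_count, base_per_pair=0.5, min_timeout=60):
    # timeout = targets × ports × seconds-per-pair, floored at min_timeout
    return max(min_timeout, int(target_count * port_count * base_per_pair))

print(port_scan_timeout(100, 100))  # 5000
print(port_scan_timeout(1, 10))     # 60 (floored)
```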
def _parse_port_count(tool_config: dict) -> int:
"""
从工具配置中解析端口数量
优先级:
1. top-ports: N → 返回 N
2. ports: "80,443,8080" → 返回逗号分隔的数量
3. ports: "1-1000" → 返回范围的大小
4. ports: "1-65535" → 返回 65535
5. 默认 → 返回 100naabu 默认扫描 top 100
Args:
tool_config: 工具配置字典
Returns:
int: 端口数量
"""
# 1. Check the top-ports setting
if 'top-ports' in tool_config:
top_ports = tool_config['top-ports']
if isinstance(top_ports, int) and top_ports > 0:
return top_ports
logger.warning(f"top-ports 配置无效: {top_ports},使用默认值")
# 2. Check the ports setting
if 'ports' in tool_config:
ports_str = str(tool_config['ports']).strip()
# 2.1 Comma-separated port list (80,443,8080)
if ',' in ports_str:
port_list = [p.strip() for p in ports_str.split(',') if p.strip()]
return len(port_list)
# 2.2 Port range (1-1000)
if '-' in ports_str:
try:
start, end = ports_str.split('-', 1)
start_port = int(start.strip())
end_port = int(end.strip())
if 1 <= start_port <= end_port <= 65535:
return end_port - start_port + 1
logger.warning(f"端口范围无效: {ports_str},使用默认值")
except ValueError:
logger.warning(f"端口范围解析失败: {ports_str},使用默认值")
# 2.3 单个端口
try:
port = int(ports_str)
if 1 <= port <= 65535:
return 1
except ValueError:
logger.warning(f"端口配置解析失败: {ports_str},使用默认值")
# 3. Default: naabu scans the top 100 ports by default
return 100
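The parsing priority above can be condensed into a few branches. A self-contained sketch of the same rules, minus the logging (`parse_port_count` is a simplified stand-in):

```python
def parse_port_count(cfg):
    # 1. top-ports wins when it is a positive int
    tp = cfg.get('top-ports')
    if isinstance(tp, int) and tp > 0:
        return tp
    ports = str(cfg.get('ports', '')).strip()
    # 2.1 comma-separated list → count the entries
    if ',' in ports:
        return len([p for p in ports.split(',') if p.strip()])
    # 2.2 range → size of the range
    if '-' in ports:
        try:
            lo, hi = (int(x) for x in ports.split('-', 1))
            if 1 <= lo <= hi <= 65535:
                return hi - lo + 1
        except ValueError:
            pass
    # 2.3 single port
    if ports.isdigit() and 1 <= int(ports) <= 65535:
        return 1
    # 3. default: naabu's top 100
    return 100

print(parse_port_count({'top-ports': 1000}))       # 1000
print(parse_port_count({'ports': '80,443,8080'}))  # 3
print(parse_port_count({'ports': '1-1000'}))       # 1000
print(parse_port_count({}))                        # 100
```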
def _setup_port_scan_directory(scan_workspace_dir: str) -> Path:
"""
创建并验证端口扫描工作目录
Args:
scan_workspace_dir: 扫描工作空间目录
Returns:
Path: 端口扫描目录路径
Raises:
RuntimeError: 目录创建或验证失败
"""
port_scan_dir = Path(scan_workspace_dir) / 'port_scan'
port_scan_dir.mkdir(parents=True, exist_ok=True)
if not port_scan_dir.is_dir():
raise RuntimeError(f"端口扫描目录创建失败: {port_scan_dir}")
if not os.access(port_scan_dir, os.W_OK):
raise RuntimeError(f"端口扫描目录不可写: {port_scan_dir}")
return port_scan_dir
def _export_scan_targets(target_id: int, port_scan_dir: Path) -> tuple[str, int, str]:
    """
    Export the scan targets to a file
    What gets exported depends on the Target type:
    - DOMAIN: subdomains from the Subdomain table
    - IP: target.name written as-is
    - CIDR: every IP in the CIDR range, expanded
    Args:
        target_id: target ID
        port_scan_dir: port-scan directory
    Returns:
        tuple: (targets_file, target_count, target_type)
    """
    logger.info("Step 1: exporting the scan target list")
    targets_file = str(port_scan_dir / 'targets.txt')
    export_result = export_scan_targets_task(
        target_id=target_id,
        output_file=targets_file,
        batch_size=1000  # read 1000 rows at a time to keep memory usage low
    )
    target_count = export_result['total_count']
    target_type = export_result.get('target_type', 'unknown')
    logger.info(
        "✓ Scan targets exported - type: %s, file: %s, count: %d",
        target_type,
        export_result['output_file'],
        target_count
    )
    if target_count == 0:
        logger.warning("The target has no scannable addresses; port scan cannot run")
    return export_result['output_file'], target_count, target_type
def _run_scans_sequentially(
    enabled_tools: dict,
    domains_file: str,
    port_scan_dir: Path,
    scan_id: int,
    target_id: int,
    target_name: str
) -> tuple[dict, int, list, list]:
    """
    Run the port-scan tasks sequentially
    Args:
        enabled_tools: configuration dict of the enabled tools
        domains_file: path to the domains file
        port_scan_dir: port-scan directory
        scan_id: scan task ID
        target_id: target ID
        target_name: target name (used in error logs)
    Returns:
        tuple: (tool_stats, processed_records, successful_tool_names, failed_tools)
    Note:
        Port scanning streams its output; no result file is produced.
        If every tool fails, an empty result is returned instead of raising.
    """
    # ==================== Build commands and run sequentially ====================
    tool_stats = {}
    processed_records = 0
    failed_tools = []  # failed tools, with reasons
    # Run every enabled port-scan tool one after another
    for tool_name, tool_config in enabled_tools.items():
        # 1. Build the full command (variable substitution)
        try:
            command = build_scan_command(
                tool_name=tool_name,
                scan_type='port_scan',
                command_params={
                    'domains_file': domains_file  # maps to {domains_file}
                },
                tool_config=tool_config  # tool configuration from YAML
            )
        except Exception as e:
            reason = f"Command build failed: {str(e)}"
            logger.error(f"Failed to build the {tool_name} command: {e}")
            failed_tools.append({'tool': tool_name, 'reason': reason})
            continue
        # 2. Resolve the timeout ('auto' means compute it dynamically)
        config_timeout = tool_config['timeout']
        if config_timeout == 'auto':
            # Compute the timeout dynamically
            config_timeout = calculate_port_scan_timeout(
                tool_config=tool_config,
                file_path=str(domains_file)
            )
            logger.info(f"✓ Tool {tool_name} computed timeout dynamically: {config_timeout}")
        # 2.1 Build the log file path
        from datetime import datetime
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        log_file = port_scan_dir / f"{tool_name}_{timestamp}.log"
        # 3. Run the scan task
        logger.info("Starting %s scan (timeout: %d s)...", tool_name, config_timeout)
        try:
            # Call the task directly (sequential execution).
            # Note: the port scan streams to stdout; output_file is not used.
            result = run_and_stream_save_ports_task(
                cmd=command,
                tool_name=tool_name,  # tool name
                scan_id=scan_id,
                target_id=target_id,
                cwd=str(port_scan_dir),
                shell=True,
                batch_size=1000,
                timeout=config_timeout,
                log_file=str(log_file)  # new: log file path
            )
            tool_stats[tool_name] = {
                'command': command,
                'result': result,
                'timeout': config_timeout
            }
            processed_records += result.get('processed_records', 0)
            logger.info(
                "✓ Tool %s finished streaming - records: %d",
                tool_name, result.get('processed_records', 0)
            )
        except subprocess.TimeoutExpired as exc:
            # Handle timeouts separately.
            # Note: rows parsed before the timeout were already saved to the database.
            reason = f"Timed out (configured: {config_timeout} s)"
            failed_tools.append({'tool': tool_name, 'reason': reason})
            logger.warning(
                "⚠️ Tool %s timed out - configured timeout: %d\n"
                "Note: ports parsed before the timeout were saved to the database, but the scan did not finish.",
                tool_name, config_timeout
            )
        except Exception as exc:
            # Any other error
            failed_tools.append({'tool': tool_name, 'reason': str(exc)})
            logger.error("Tool %s failed: %s", tool_name, exc, exc_info=True)
    if failed_tools:
        logger.warning(
            "The following scan tools failed: %s",
            ', '.join([f['tool'] for f in failed_tools])
        )
    if not tool_stats:
        error_details = "; ".join([f"{f['tool']}: {f['reason']}" for f in failed_tools])
        logger.warning("All port-scan tools failed - target: %s, failed tools: %s", target_name, error_details)
        # Return an empty result instead of raising, so the scan can continue
        return {}, 0, [], failed_tools
    # Derive the list of successful tools
    successful_tool_names = [name for name in enabled_tools.keys()
                             if name not in [f['tool'] for f in failed_tools]]
    logger.info(
        "✓ Sequential port scan finished - succeeded: %d/%d (ok: %s, failed: %s)",
        len(tool_stats), len(enabled_tools),
        ', '.join(successful_tool_names) if successful_tool_names else '',
        ', '.join([f['tool'] for f in failed_tools]) if failed_tools else ''
    )
    return tool_stats, processed_records, successful_tool_names, failed_tools
@flow(
    name="port_scan",
    log_prints=True,
    on_running=[on_scan_flow_running],
    on_completion=[on_scan_flow_completed],
    on_failure=[on_scan_flow_failed],
)
def port_scan_flow(
    scan_id: int,
    target_name: str,
    target_id: int,
    scan_workspace_dir: str,
    enabled_tools: dict
) -> dict:
    """
    Port-scan Flow
    What it does:
    1. Scan the target domains/IPs for open ports
    2. Save host + ip + port triples to the HostPortMapping table
    Assets produced:
    - HostPortMapping (host-port mapping, host + ip + port triples)
    Workflow:
    Step 0: create the working directory
    Step 1: export the domain list to a file (consumed by the scan tools)
    Step 2: parse the configuration and collect the enabled tools
    Step 3: run the scan tools sequentially, parsing their output into the database in real time (→ HostPortMapping)
    Args:
        scan_id: scan task ID
        target_name: domain name
        target_id: target ID
        scan_workspace_dir: scan workspace directory
        enabled_tools: configuration dict of the enabled tools
    Returns:
        dict: {
            'success': bool,
            'scan_id': int,
            'target': str,
            'scan_workspace_dir': str,
            'domains_file': str,
            'domain_count': int,
            'processed_records': int,
            'executed_tasks': list,
            'tool_stats': {
                'total': int,                   # total number of tools
                'successful': int,              # number of successful tools
                'failed': int,                  # number of failed tools
                'successful_tools': list[str],  # e.g. ['naabu_active']
                'failed_tools': list[dict],     # e.g. [{'tool': 'naabu_passive', 'reason': 'timeout'}]
                'details': dict                 # detailed results (kept for backwards compatibility)
            }
        }
    Raises:
        ValueError: configuration error
        RuntimeError: execution failure
    Note:
        Port-scan tools such as naabu resolve each host to its IPs and emit
        host + ip + port triples. One host may map to several IPs (CDN, load
        balancing), hence the triple mapping table.
    """
    try:
        # Parameter validation
        if scan_id is None:
            raise ValueError("scan_id must not be empty")
        if not target_name:
            raise ValueError("target_name must not be empty")
        if target_id is None:
            raise ValueError("target_id must not be empty")
        if not scan_workspace_dir:
            raise ValueError("scan_workspace_dir must not be empty")
        if not enabled_tools:
            raise ValueError("enabled_tools must not be empty")
        logger.info(
            "="*60 + "\n" +
            "Starting port scan\n" +
            f"  Scan ID: {scan_id}\n" +
            f"  Target: {target_name}\n" +
            f"  Workspace: {scan_workspace_dir}\n" +
            "="*60
        )
        # Step 0: create the working directory
        port_scan_dir = _setup_port_scan_directory(scan_workspace_dir)
        # Step 1: export the scan target list to a file (content depends on the Target type)
        targets_file, target_count, target_type = _export_scan_targets(target_id, port_scan_dir)
        if target_count == 0:
            logger.warning("The target has no scannable addresses; skipping the port scan")
            return {
                'success': True,
                'scan_id': scan_id,
                'target': target_name,
                'scan_workspace_dir': scan_workspace_dir,
                'targets_file': targets_file,
                'target_count': 0,
                'target_type': target_type,
                'processed_records': 0,
                'executed_tasks': ['export_scan_targets'],
                'tool_stats': {
                    'total': 0,
                    'successful': 0,
                    'failed': 0,
                    'successful_tools': [],
                    'failed_tools': [],
                    'details': {}
                }
            }
        # Step 2: tool configuration
        logger.info("Step 2: tool configuration")
        logger.info(
            "✓ Enabled tools: %s",
            ', '.join(enabled_tools.keys())
        )
        # Step 3: run the scan tools sequentially
        logger.info("Step 3: running the scan tools sequentially")
        tool_stats, processed_records, successful_tool_names, failed_tools = _run_scans_sequentially(
            enabled_tools=enabled_tools,
            domains_file=targets_file,  # now targets_file; parameter name kept for compatibility
            port_scan_dir=port_scan_dir,
            scan_id=scan_id,
            target_id=target_id,
            target_name=target_name
        )
        logger.info("="*60 + "\n✓ Port scan finished\n" + "="*60)
        # Build the list of executed tasks dynamically
        executed_tasks = ['export_scan_targets', 'parse_config']
        executed_tasks.extend([f'run_and_stream_save_ports ({tool})' for tool in tool_stats.keys()])
        return {
            'success': True,
            'scan_id': scan_id,
            'target': target_name,
            'scan_workspace_dir': scan_workspace_dir,
            'targets_file': targets_file,
            'target_count': target_count,
            'target_type': target_type,
            'processed_records': processed_records,
            'executed_tasks': executed_tasks,
            'tool_stats': {
                'total': len(tool_stats) + len(failed_tools),
                'successful': len(successful_tool_names),
                'failed': len(failed_tools),
                'successful_tools': successful_tool_names,
                'failed_tools': failed_tools,  # e.g. [{'tool': 'naabu_active', 'reason': 'timeout'}]
                'details': tool_stats  # detailed results (kept for backwards compatibility)
            }
        }
    except ValueError as e:
        logger.error("Configuration error: %s", e)
        raise
    except RuntimeError as e:
        logger.error("Runtime error: %s", e)
        raise
    except Exception as e:
        logger.exception("Port scan failed: %s", e)
        raise


@@ -1,499 +0,0 @@
"""
站点扫描 Flow
负责编排站点扫描的完整流程
架构:
- Flow 负责编排多个原子 Task
- 支持串行执行扫描工具(流式处理)
- 每个 Task 可独立重试
- 配置由 YAML 解析
"""
# Django 环境初始化(导入即生效)
from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
import os
import subprocess
from pathlib import Path
from typing import Callable
from prefect import flow
from apps.scan.tasks.site_scan import export_site_urls_task, run_and_stream_save_websites_task
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command
logger = logging.getLogger(__name__)
def calculate_timeout_by_line_count(
    tool_config: dict,
    file_path: str,
    base_per_time: int = 1,
    min_timeout: int = 60
) -> int:
    """
    Compute a timeout from the number of lines in a file
    Counts the lines with wc -l, then multiplies by the per-line base time.
    Args:
        tool_config: tool configuration dict (unused here, kept for interface consistency)
        file_path: file whose lines are counted
        base_per_time: base time per line (default 1 second)
        min_timeout: minimum timeout (default 60 seconds)
    Returns:
        int: computed timeout in seconds, never below min_timeout
    Example:
        timeout = calculate_timeout_by_line_count(
            tool_config={},
            file_path='/path/to/urls.txt',
            base_per_time=2
        )
    """
    try:
        # Count lines quickly with wc -l
        result = subprocess.run(
            ['wc', '-l', file_path],
            capture_output=True,
            text=True,
            check=True
        )
        # wc -l prints: line count, a space, then the file name
        line_count = int(result.stdout.strip().split()[0])
        # timeout = line count x per-line base time, floored at the minimum
        timeout = max(line_count * base_per_time, min_timeout)
        logger.info(
            f"Auto-computed timeout: file={file_path}, "
            f"lines={line_count}, per-line={base_per_time}s, minimum={min_timeout}s, timeout={timeout}"
        )
        return timeout
    except Exception as e:
        # Fall back to the default if wc -l fails
        logger.warning(f"wc -l line count failed: {e}, using default timeout: {min_timeout}")
        return min_timeout
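The same lines-times-seconds rule can be reproduced without shelling out to `wc -l`. The sketch below is illustrative only (`timeout_by_line_count` is a hypothetical name, not this module's implementation):

```python
import tempfile

def timeout_by_line_count(path: str, base_per_time: int = 1, min_timeout: int = 60) -> int:
    # Pure-Python equivalent of the wc -l rule above: line count x seconds
    # per line, floored at min_timeout; fall back to the minimum on I/O errors.
    try:
        with open(path, 'rb') as f:
            lines = sum(1 for _ in f)
    except OSError:
        return min_timeout
    return max(lines * base_per_time, min_timeout)

# 200 URLs at 2 s each -> 400 s, well above the 60 s floor
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('\n'.join(f'https://example.com/{i}' for i in range(200)) + '\n')
print(timeout_by_line_count(tmp.name, base_per_time=2))  # 400
```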
def _setup_site_scan_directory(scan_workspace_dir: str) -> Path:
    """
    Create and validate the site-scan working directory
    Args:
        scan_workspace_dir: scan workspace directory
    Returns:
        Path: path of the site-scan directory
    Raises:
        RuntimeError: directory creation or validation failed
    """
    site_scan_dir = Path(scan_workspace_dir) / 'site_scan'
    site_scan_dir.mkdir(parents=True, exist_ok=True)
    if not site_scan_dir.is_dir():
        raise RuntimeError(f"Failed to create site-scan directory: {site_scan_dir}")
    if not os.access(site_scan_dir, os.W_OK):
        raise RuntimeError(f"Site-scan directory is not writable: {site_scan_dir}")
    return site_scan_dir
def _export_site_urls(target_id: int, site_scan_dir: Path, target_name: str = None) -> tuple[str, int, int]:
    """
    Export the site URLs to a file
    Args:
        target_id: target ID
        site_scan_dir: site-scan directory
        target_name: target name (default written on lazy loading)
    Returns:
        tuple: (urls_file, total_urls, association_count)
    Note:
        A zero URL count is reported via the return value; no exception is raised.
    """
    logger.info("Step 1: exporting the site URL list")
    urls_file = str(site_scan_dir / 'site_urls.txt')
    export_result = export_site_urls_task(
        target_id=target_id,
        output_file=urls_file,
        target_name=target_name,
        batch_size=1000  # process 1000 subdomains per batch
    )
    total_urls = export_result['total_urls']
    association_count = export_result['association_count']  # number of host-port associations
    logger.info(
        "✓ Site URLs exported - file: %s, URLs: %d, associations: %d",
        export_result['output_file'],
        total_urls,
        association_count
    )
    if total_urls == 0:
        logger.warning("The target has no usable site URLs; site scan cannot run")
        # Do not raise; the caller decides how to handle this
        # raise ValueError("The target has no usable site URLs; site scan cannot run")
    return export_result['output_file'], total_urls, association_count
def _run_scans_sequentially(
    enabled_tools: dict,
    urls_file: str,
    total_urls: int,
    site_scan_dir: Path,
    scan_id: int,
    target_id: int,
    target_name: str
) -> tuple[dict, int, list, list]:
    """
    Run the site-scan tasks sequentially
    Args:
        enabled_tools: configuration dict of the enabled tools
        urls_file: URL file path
        total_urls: total number of URLs
        site_scan_dir: site-scan directory
        scan_id: scan task ID
        target_id: target ID
        target_name: target name (used in error logs)
    Returns:
        tuple: (tool_stats, processed_records, successful_tool_names, failed_tools)
    Note:
        If every tool fails, an empty result is returned instead of raising.
    """
    tool_stats = {}
    processed_records = 0
    failed_tools = []
    for tool_name, tool_config in enabled_tools.items():
        # 1. Build the full command (variable substitution)
        try:
            command = build_scan_command(
                tool_name=tool_name,
                scan_type='site_scan',
                command_params={
                    'url_file': urls_file
                },
                tool_config=tool_config
            )
        except Exception as e:
            reason = f"Command build failed: {str(e)}"
            logger.error(f"Failed to build the {tool_name} command: {e}")
            failed_tools.append({'tool': tool_name, 'reason': reason})
            continue
        # 2. Resolve the timeout ('auto' means compute it dynamically)
        config_timeout = tool_config.get('timeout', 300)
        if config_timeout == 'auto':
            # Compute the timeout dynamically
            timeout = calculate_timeout_by_line_count(tool_config, urls_file, base_per_time=1)
            logger.info(f"✓ Tool {tool_name} computed timeout dynamically: {timeout}")
        else:
            # Use the larger of the configured and the dynamically computed timeout
            dynamic_timeout = calculate_timeout_by_line_count(tool_config, urls_file, base_per_time=1)
            timeout = max(dynamic_timeout, config_timeout)
        # 2.1 Build the log file path (as in the port scan)
        from datetime import datetime
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        log_file = site_scan_dir / f"{tool_name}_{timestamp}.log"
        logger.info(
            "Starting %s site scan - URLs: %d, effective timeout: %ds",
            tool_name, total_urls, timeout
        )
        # 3. Run the scan task
        try:
            # Stream the scan and save results as they arrive
            result = run_and_stream_save_websites_task(
                cmd=command,
                tool_name=tool_name,  # new: tool name
                scan_id=scan_id,
                target_id=target_id,
                cwd=str(site_scan_dir),
                shell=True,
                batch_size=1000,
                timeout=timeout,
                log_file=str(log_file)  # new: log file path
            )
            tool_stats[tool_name] = {
                'command': command,
                'result': result,
                'timeout': timeout
            }
            processed_records += result.get('processed_records', 0)
            logger.info(
                "✓ Tool %s finished streaming - records: %d, sites created: %d, skipped: %d",
                tool_name,
                result.get('processed_records', 0),
                result.get('created_websites', 0),
                result.get('skipped_no_subdomain', 0) + result.get('skipped_failed', 0)
            )
        except subprocess.TimeoutExpired as exc:
            # Handle timeouts separately
            reason = f"Timed out (configured: {timeout} s)"
            failed_tools.append({'tool': tool_name, 'reason': reason})
            logger.warning(
                "⚠️ Tool %s timed out - configured timeout: %d\n"
                "Note: sites parsed before the timeout were saved to the database, but the scan did not finish.",
                tool_name, timeout
            )
        except Exception as exc:
            # Any other error
            failed_tools.append({'tool': tool_name, 'reason': str(exc)})
            logger.error("Tool %s failed: %s", tool_name, exc, exc_info=True)
    if failed_tools:
        logger.warning(
            "The following scan tools failed: %s",
            ', '.join([f['tool'] for f in failed_tools])
        )
    if not tool_stats:
        error_details = "; ".join([f"{f['tool']}: {f['reason']}" for f in failed_tools])
        logger.warning("All site-scan tools failed - target: %s, failed tools: %s", target_name, error_details)
        # Return an empty result instead of raising, so the scan can continue
        return {}, 0, [], failed_tools
    # Derive the list of successful tools
    successful_tool_names = [name for name in enabled_tools.keys()
                             if name not in [f['tool'] for f in failed_tools]]
    logger.info(
        "✓ Sequential site scan finished - succeeded: %d/%d (ok: %s, failed: %s)",
        len(tool_stats), len(enabled_tools),
        ', '.join(successful_tool_names) if successful_tool_names else '',
        ', '.join([f['tool'] for f in failed_tools]) if failed_tools else ''
    )
    return tool_stats, processed_records, successful_tool_names, failed_tools
def calculate_timeout(url_count: int, base: int = 600, per_url: int = 1) -> int:
    """
    Compute a scan timeout from the number of URLs
    Rules:
    - base time: 600 seconds (10 minutes) by default
    - plus, per URL: 1 second by default
    Args:
        url_count: number of URLs, must be a positive integer
        base: base timeout in seconds, default 600
        per_url: extra seconds per URL, default 1
    Returns:
        int: computed timeout in seconds (no upper bound; the caller caps it if needed)
    Raises:
        ValueError: url_count is zero or negative
    """
    if url_count < 0:
        raise ValueError(f"URL count must not be negative: {url_count}")
    if url_count == 0:
        raise ValueError("URL count must not be 0")
    timeout = base + int(url_count * per_url)
    # No upper bound; the caller controls it as needed
    return timeout
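A minimal usage sketch of the base-plus-per-URL formula above (`timeout_for` is a hypothetical stand-in for `calculate_timeout`, reproducing the same rule):

```python
def timeout_for(url_count: int, base: int = 600, per_url: int = 1) -> int:
    # Mirrors calculate_timeout above: base plus per_url seconds per URL,
    # rejecting non-positive counts.
    if url_count <= 0:
        raise ValueError(f"url_count must be positive, got {url_count}")
    return base + int(url_count * per_url)

print(timeout_for(1))      # 601
print(timeout_for(3000))   # 3600
```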
@flow(
    name="site_scan",
    log_prints=True,
    on_running=[on_scan_flow_running],
    on_completion=[on_scan_flow_completed],
    on_failure=[on_scan_flow_failed],
)
def site_scan_flow(
    scan_id: int,
    target_name: str,
    target_id: int,
    scan_workspace_dir: str,
    enabled_tools: dict
) -> dict:
    """
    Site-scan Flow
    What it does:
    1. Fetch every subdomain of the target together with its ports, build the URLs, and write them to a file
    2. Probe the URLs in bulk with httpx, saving results to the database as they stream in
    Workflow:
    Step 0: create the working directory
    Step 1: export the site URL list
    Step 2: parse the configuration and collect the enabled tools
    Step 3: run the scan tools sequentially, saving results in real time
    Args:
        scan_id: scan task ID
        target_name: target name
        target_id: target ID
        scan_workspace_dir: scan workspace directory
        enabled_tools: configuration dict of the enabled tools
    Returns:
        dict: {
            'success': bool,
            'scan_id': int,
            'target': str,
            'scan_workspace_dir': str,
            'urls_file': str,
            'total_urls': int,
            'association_count': int,
            'processed_records': int,
            'created_websites': int,
            'skipped_no_subdomain': int,
            'skipped_failed': int,
            'executed_tasks': list,
            'tool_stats': {
                'total': int,
                'successful': int,
                'failed': int,
                'successful_tools': list[str],
                'failed_tools': list[dict]
            }
        }
    Raises:
        ValueError: configuration error
        RuntimeError: execution failure
    """
    try:
        logger.info(
            "="*60 + "\n" +
            "Starting site scan\n" +
            f"  Scan ID: {scan_id}\n" +
            f"  Target: {target_name}\n" +
            f"  Workspace: {scan_workspace_dir}\n" +
            "="*60
        )
        # Parameter validation
        if scan_id is None:
            raise ValueError("scan_id must not be empty")
        if not target_name:
            raise ValueError("target_name must not be empty")
        if target_id is None:
            raise ValueError("target_id must not be empty")
        if not scan_workspace_dir:
            raise ValueError("scan_workspace_dir must not be empty")
        # Step 0: create the working directory
        site_scan_dir = _setup_site_scan_directory(scan_workspace_dir)
        # Step 1: export the site URLs
        urls_file, total_urls, association_count = _export_site_urls(
            target_id, site_scan_dir, target_name
        )
        if total_urls == 0:
            logger.warning("The target has no usable site URLs; skipping the site scan")
            return {
                'success': True,
                'scan_id': scan_id,
                'target': target_name,
                'scan_workspace_dir': scan_workspace_dir,
                'urls_file': urls_file,
                'total_urls': 0,
                'association_count': association_count,
                'processed_records': 0,
                'created_websites': 0,
                'skipped_no_subdomain': 0,
                'skipped_failed': 0,
                'executed_tasks': ['export_site_urls'],
                'tool_stats': {
                    'total': 0,
                    'successful': 0,
                    'failed': 0,
                    'successful_tools': [],
                    'failed_tools': [],
                    'details': {}
                }
            }
        # Step 2: tool configuration
        logger.info("Step 2: tool configuration")
        logger.info(
            "✓ Enabled tools: %s",
            ', '.join(enabled_tools.keys())
        )
        # Step 3: run the scan tools sequentially
        logger.info("Step 3: running the scan tools sequentially, saving results in real time")
        tool_stats, processed_records, successful_tool_names, failed_tools = _run_scans_sequentially(
            enabled_tools=enabled_tools,
            urls_file=urls_file,
            total_urls=total_urls,
            site_scan_dir=site_scan_dir,
            scan_id=scan_id,
            target_id=target_id,
            target_name=target_name
        )
        logger.info("="*60 + "\n✓ Site scan finished\n" + "="*60)
        # Build the list of executed tasks dynamically
        executed_tasks = ['export_site_urls', 'parse_config']
        executed_tasks.extend([f'run_and_stream_save_websites ({tool})' for tool in tool_stats.keys()])
        # Aggregate the per-tool results
        total_created = sum(stats['result'].get('created_websites', 0) for stats in tool_stats.values())
        total_skipped_no_subdomain = sum(stats['result'].get('skipped_no_subdomain', 0) for stats in tool_stats.values())
        total_skipped_failed = sum(stats['result'].get('skipped_failed', 0) for stats in tool_stats.values())
        return {
            'success': True,
            'scan_id': scan_id,
            'target': target_name,
            'scan_workspace_dir': scan_workspace_dir,
            'urls_file': urls_file,
            'total_urls': total_urls,
            'association_count': association_count,
            'processed_records': processed_records,
            'created_websites': total_created,
            'skipped_no_subdomain': total_skipped_no_subdomain,
            'skipped_failed': total_skipped_failed,
            'executed_tasks': executed_tasks,
            'tool_stats': {
                'total': len(enabled_tools),
                'successful': len(successful_tool_names),
                'failed': len(failed_tools),
                'successful_tools': successful_tool_names,
                'failed_tools': failed_tools,
                'details': tool_stats
            }
        }
    except ValueError as e:
        logger.error("Configuration error: %s", e)
        raise
    except RuntimeError as e:
        logger.error("Runtime error: %s", e)
        raise
    except Exception as e:
        logger.exception("Site scan failed: %s", e)
        raise


@@ -1,769 +0,0 @@
"""
子域名发现扫描 Flow
负责编排子域名发现扫描的完整流程
架构:
- Flow 负责编排多个原子 Task
- 支持并行执行扫描工具
- 每个 Task 可独立重试
- 配置由 YAML 解析
增强流程4 阶段):
Stage 1: 被动收集(并行) - 必选
Stage 2: 字典爆破(可选) - 子域名字典爆破
Stage 3: 变异生成 + 验证(可选) - dnsgen + 通用存活验证
Stage 4: DNS 存活验证(可选) - 通用存活验证
各阶段可灵活开关,最终结果根据实际执行的阶段动态决定
"""
# Django 环境初始化(导入即生效)
from apps.common.prefect_django_setup import setup_django_for_prefect
from prefect import flow
from pathlib import Path
import logging
import os
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_wordlist_local
from apps.engine.services.wordlist_service import WordlistService
from apps.common.normalizer import normalize_domain
from apps.common.validators import validate_domain
from datetime import datetime
import uuid
import subprocess
logger = logging.getLogger(__name__)
def _setup_subdomain_directory(scan_workspace_dir: str) -> Path:
    """
    Create and validate the subdomain-scan working directory
    Args:
        scan_workspace_dir: scan workspace directory
    Returns:
        Path: path of the subdomain-scan directory
    Raises:
        RuntimeError: directory creation or validation failed
    """
    result_dir = Path(scan_workspace_dir) / 'subdomain_discovery'
    result_dir.mkdir(parents=True, exist_ok=True)
    if not result_dir.is_dir():
        raise RuntimeError(f"Failed to create subdomain-scan directory: {result_dir}")
    if not os.access(result_dir, os.W_OK):
        raise RuntimeError(f"Subdomain-scan directory is not writable: {result_dir}")
    return result_dir
def _validate_and_normalize_target(target_name: str) -> str:
    """
    Validate and normalise the target domain
    Args:
        target_name: raw target domain
    Returns:
        str: normalised domain
    Raises:
        ValueError: the domain is invalid
    Example:
        >>> _validate_and_normalize_target('EXAMPLE.COM')
        'example.com'
        >>> _validate_and_normalize_target('http://example.com')
        'example.com'
    """
    try:
        normalized_target = normalize_domain(target_name)
        validate_domain(normalized_target)
        logger.debug("Domain validated: %s -> %s", target_name, normalized_target)
        return normalized_target
    except ValueError as e:
        error_msg = f"Invalid target domain: {target_name} - {e}"
        logger.error(error_msg)
        raise ValueError(error_msg) from e
def _run_scans_parallel(
    enabled_tools: dict,
    domain_name: str,
    result_dir: Path
) -> tuple[list, list, list]:
    """
    Run all enabled subdomain scan tools in parallel
    Args:
        enabled_tools: configuration dict of the enabled tools, e.g. {'tool_name': {'timeout': 600, ...}}
        domain_name: target domain
        result_dir: output directory
    Returns:
        tuple: (result_files, failed_tools, successful_tool_names)
    Note:
        If no tool can run, an empty result is returned instead of raising.
    """
    # Import the task function
    from apps.scan.tasks.subdomain_discovery import run_subdomain_discovery_task
    # One timestamp shared by all tools
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    # TODO: hook up the proxy-pool manager
    # from apps.proxy.services import proxy_pool
    # proxy_stats = proxy_pool.get_stats()
    # logger.info(f"Proxy pool status: {proxy_stats['healthy']}/{proxy_stats['total']} available")
    failures = []  # tools whose command could not be built
    futures = {}
    # 1. Build the commands and submit the parallel tasks
    for tool_name, tool_config in enabled_tools.items():
        # 1.1 Build a unique output file path (absolute)
        short_uuid = uuid.uuid4().hex[:4]
        output_file = str(result_dir / f"{tool_name}_{timestamp}_{short_uuid}.txt")
        # 1.2 Build the full command (variable substitution)
        try:
            command = build_scan_command(
                tool_name=tool_name,
                scan_type='subdomain_discovery',
                command_params={
                    'domain': domain_name,      # maps to {domain}
                    'output_file': output_file  # maps to {output_file}
                },
                tool_config=tool_config
            )
        except Exception as e:
            failure_msg = f"{tool_name}: command build failed - {e}"
            failures.append(failure_msg)
            logger.error(f"Failed to build the {tool_name} command: {e}")
            continue
        # 1.3 Resolve the timeout ('auto' means use the default)
        timeout = tool_config['timeout']
        if timeout == 'auto':
            # Subdomain tools usually run long; default to 600 seconds
            timeout = 600
            logger.info(f"✓ Tool {tool_name} using default timeout: {timeout}")
        # 1.4 Submit the task
        logger.debug(
            f"Submitting task - tool: {tool_name}, timeout: {timeout}s, output: {output_file}"
        )
        future = run_subdomain_discovery_task.submit(
            tool=tool_name,
            command=command,
            timeout=timeout,
            output_file=output_file
        )
        futures[tool_name] = future
    # 2. Check whether any tool was submitted at all
    if not futures:
        logger.warning(
            "No scan tool could be started - target: %s, details: %s",
            domain_name, "; ".join(failures)
        )
        # Return an empty result instead of raising, so the scan can continue
        return [], [{'tool': 'all', 'reason': 'no tool could be started'}], []
    # 3. Wait for the parallel tasks and collect the results
    result_files = []
    failed_tools = []
    for tool_name, future in futures.items():
        try:
            result = future.result()  # returns the file path (str) or "" on failure
            if result:
                result_files.append(result)
                logger.info("✓ Scan tool %s succeeded: %s", tool_name, result)
            else:
                failure_msg = f"{tool_name}: no result file produced"
                failures.append(failure_msg)
                failed_tools.append({'tool': tool_name, 'reason': 'no result file produced'})
                logger.warning("⚠️ Scan tool %s produced no result file", tool_name)
        except Exception as e:
            failure_msg = f"{tool_name}: {str(e)}"
            failures.append(failure_msg)
            failed_tools.append({'tool': tool_name, 'reason': str(e)})
            logger.warning("⚠️ Scan tool %s failed: %s", tool_name, str(e))
    # 4. Check whether any tool succeeded
    if not result_files:
        logger.warning(
            "All scan tools failed - target: %s, details: %s",
            domain_name, "; ".join(failures)
        )
        # Return an empty result instead of raising, so the scan can continue
        return [], failed_tools, []
    # 5. Derive the list of successful tools
    successful_tool_names = [name for name in futures.keys()
                             if name not in [f['tool'] for f in failed_tools]]
    logger.info(
        "✓ Parallel scan finished - succeeded: %d/%d (ok: %s, failed: %s)",
        len(result_files), len(futures),
        ', '.join(successful_tool_names) if successful_tool_names else '',
        ', '.join([f['tool'] for f in failed_tools]) if failed_tools else ''
    )
    return result_files, failed_tools, successful_tool_names
def _run_single_tool(
    tool_name: str,
    tool_config: dict,
    command_params: dict,
    result_dir: Path,
    scan_type: str = 'subdomain_discovery'
) -> str:
    """
    Run a single scan tool
    Args:
        tool_name: tool name
        tool_config: tool configuration
        command_params: command parameters
        result_dir: result directory
        scan_type: scan type
    Returns:
        str: output file path, or an empty string on failure
    """
    from apps.scan.tasks.subdomain_discovery import run_subdomain_discovery_task
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    short_uuid = uuid.uuid4().hex[:4]
    output_file = str(result_dir / f"{tool_name}_{timestamp}_{short_uuid}.txt")
    # Add output_file to the parameters
    command_params['output_file'] = output_file
    try:
        command = build_scan_command(
            tool_name=tool_name,
            scan_type=scan_type,
            command_params=command_params,
            tool_config=tool_config
        )
    except Exception as e:
        logger.error(f"Failed to build the {tool_name} command: {e}")
        return ""
    timeout = tool_config.get('timeout', 3600)
    if timeout == 'auto':
        timeout = 3600
    logger.info(f"Running {tool_name}: timeout={timeout}s")
    try:
        result = run_subdomain_discovery_task(
            tool=tool_name,
            command=command,
            timeout=timeout,
            output_file=output_file
        )
        return result if result else ""
    except Exception as e:
        logger.warning(f"{tool_name} failed: {e}")
        return ""
def _count_lines(file_path: str) -> int:
    """
    Count the non-empty lines of a file
    Args:
        file_path: file path
    Returns:
        int: number of non-empty lines
    """
    try:
        with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
            return sum(1 for line in f if line.strip())
    except Exception as e:
        logger.warning(f"Failed to count lines: {file_path} - {e}")
        return 0
def _merge_files(file_list: list, output_file: str) -> str:
    """
    Merge several files and deduplicate their lines
    Args:
        file_list: list of file paths
        output_file: output file path
    Returns:
        str: output file path
    """
    domains = set()
    for f in file_list:
        if f and Path(f).exists():
            with open(f, 'r', encoding='utf-8', errors='ignore') as fp:
                for line in fp:
                    line = line.strip()
                    if line:
                        domains.add(line)
    with open(output_file, 'w', encoding='utf-8') as fp:
        for domain in sorted(domains):
            fp.write(domain + '\n')
    logger.info(f"Merge complete: {len(domains)} domains -> {output_file}")
    return output_file
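The merge-and-deduplicate step can be exercised standalone. The sketch below reproduces the same logic under a hypothetical name (`merge_unique_sorted` is illustrative, not this module's API):

```python
import os
import tempfile
from pathlib import Path

def merge_unique_sorted(inputs: list, output: str) -> int:
    # Same idea as _merge_files above: union the non-empty lines of every
    # existing input file, write them sorted, return how many survived.
    seen = set()
    for name in inputs:
        if name and Path(name).exists():
            with open(name, encoding='utf-8', errors='ignore') as fp:
                seen.update(line.strip() for line in fp if line.strip())
    with open(output, 'w', encoding='utf-8') as fp:
        fp.writelines(d + '\n' for d in sorted(seen))
    return len(seen)

tmpdir = tempfile.mkdtemp()
a, b = os.path.join(tmpdir, 'a.txt'), os.path.join(tmpdir, 'b.txt')
Path(a).write_text('x.example.com\ny.example.com\n')
Path(b).write_text('y.example.com\nz.example.com\n')
# 'y.example.com' appears twice but is kept once; missing files are skipped
print(merge_unique_sorted([a, b, 'missing.txt'], os.path.join(tmpdir, 'out.txt')))  # 3
```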
@flow(
    name="subdomain_discovery",
    log_prints=True,
    on_running=[on_scan_flow_running],
    on_completion=[on_scan_flow_completed],
    on_failure=[on_scan_flow_failed],
)
def subdomain_discovery_flow(
    scan_id: int,
    target_name: str,
    target_id: int,
    scan_workspace_dir: str,
    enabled_tools: dict
) -> dict:
    """Subdomain-discovery scan flow
    Workflow (4 stages):
    Stage 1: passive collection (parallel) - mandatory
    Stage 2: wordlist bruteforce (optional) - subdomain dictionary bruteforce
    Stage 3: permutation + resolution (optional) - dnsgen + generic liveness check
    Stage 4: DNS liveness check (optional) - generic liveness check
    Final: save to the database
    Note:
    - Subdomain discovery only makes sense for DOMAIN targets
    - IP and CIDR targets are skipped automatically
    Args:
        scan_id: scan task ID
        target_name: target name (domain)
        target_id: target ID
        scan_workspace_dir: scan workspace directory (created by the service layer)
        enabled_tools: scan configuration dict:
            {
                'passive_tools': {...},
                'bruteforce': {...},
                'permutation': {...},
                'resolve': {...}
            }
    Returns:
        dict: scan result
    Raises:
        ValueError: configuration error
        RuntimeError: execution failure
    """
    try:
        # ==================== Parameter validation ====================
        if scan_id is None:
            raise ValueError("scan_id must not be empty")
        if target_id is None:
            raise ValueError("target_id must not be empty")
        if not scan_workspace_dir:
            raise ValueError("scan_workspace_dir must not be empty")
        if enabled_tools is None:
            raise ValueError("enabled_tools must not be empty")
        scan_config = enabled_tools
        # Skip the scan if no target domain was provided
        if not target_name:
            logger.warning("No target domain provided; skipping subdomain discovery")
            return _empty_result(scan_id, '', scan_workspace_dir)
        # ==================== Check the Target type ====================
        # Subdomain discovery only applies to DOMAIN targets; skip IP and CIDR
        from apps.targets.services import TargetService
        from apps.targets.models import Target
        target_service = TargetService()
        target = target_service.get_target(target_id)
        if target and target.type != Target.TargetType.DOMAIN:
            logger.info(
                "Skipping subdomain discovery: Target type is %s (ID=%d, Name=%s); subdomain discovery only applies to domains",
                target.type, target_id, target_name
            )
            return _empty_result(scan_id, target_name, scan_workspace_dir)
        # Import the task functions
        from apps.scan.tasks.subdomain_discovery import (
            run_subdomain_discovery_task,
            merge_and_validate_task,
            save_domains_task
        )
        # Step 0: preparation
        result_dir = _setup_subdomain_directory(scan_workspace_dir)
        # Validate and normalise the target domain
        try:
            domain_name = _validate_and_normalize_target(target_name)
        except ValueError as e:
            logger.warning("Invalid target domain; skipping subdomain discovery: %s", e)
            return _empty_result(scan_id, target_name, scan_workspace_dir)
        # Log only after validation succeeds
        logger.info(
            "="*60 + "\n" +
            "Starting subdomain discovery\n" +
            f"  Scan ID: {scan_id}\n" +
            f"  Domain: {domain_name}\n" +
            f"  Workspace: {scan_workspace_dir}\n" +
            "="*60
        )
        # Parse the configuration
        passive_tools = scan_config.get('passive_tools', {})
        bruteforce_config = scan_config.get('bruteforce', {})
        permutation_config = scan_config.get('permutation', {})
        resolve_config = scan_config.get('resolve', {})
        # Keep only the enabled passive tools
        enabled_passive_tools = {
            k: v for k, v in passive_tools.items()
            if v.get('enabled', True)
        }
        executed_tasks = []
        all_result_files = []
        failed_tools = []
        successful_tool_names = []
        # ==================== Stage 1: passive collection (parallel) ====================
        logger.info("=" * 40)
        logger.info("Stage 1: passive collection (parallel)")
        logger.info("=" * 40)
        if enabled_passive_tools:
            logger.info("Enabled tools: %s", ', '.join(enabled_passive_tools.keys()))
            result_files, stage1_failed, stage1_success = _run_scans_parallel(
                enabled_tools=enabled_passive_tools,
                domain_name=domain_name,
                result_dir=result_dir
            )
            all_result_files.extend(result_files)
            failed_tools.extend(stage1_failed)
            successful_tool_names.extend(stage1_success)
            executed_tasks.extend([f'passive ({tool})' for tool in stage1_success])
        else:
            logger.warning("No passive collection tools enabled")
        # Merge the Stage 1 results
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
        current_result = str(result_dir / f"subs_passive_{timestamp}.txt")
        if all_result_files:
            current_result = _merge_files(all_result_files, current_result)
            executed_tasks.append('merge_passive')
        else:
            # Create an empty file
            Path(current_result).touch()
            logger.warning("Stage 1 produced no results; created an empty file")
        # ==================== Stage 2: wordlist bruteforce (optional) ====================
        bruteforce_enabled = bruteforce_config.get('enabled', False)
        if bruteforce_enabled:
            logger.info("=" * 40)
            logger.info("Stage 2: wordlist bruteforce")
            logger.info("=" * 40)
            bruteforce_tool_config = bruteforce_config.get('subdomain_bruteforce', {})
            wordlist_name = bruteforce_tool_config.get('wordlist_name', 'dns_wordlist.txt')
            try:
                # Make sure the wordlist exists locally (with hash verification)
                local_wordlist_path = ensure_wordlist_local(wordlist_name)
                # Fetch the wordlist record to compute the timeout
                wordlist_service = WordlistService()
                wordlist = wordlist_service.get_wordlist_by_name(wordlist_name)
                timeout_value = bruteforce_tool_config.get('timeout', 3600)
                if timeout_value == 'auto' and wordlist:
                    line_count = getattr(wordlist, 'line_count', None)
                    if line_count is None:
                        try:
                            with open(local_wordlist_path, 'rb') as f:
                                line_count = sum(1 for _ in f)
                        except OSError:
                            line_count = 0
                    try:
                        line_count_int = int(line_count)
                    except (TypeError, ValueError):
                        line_count_int = 0
                    timeout_value = line_count_int * 3 if line_count_int > 0 else 3600
                    bruteforce_tool_config = {
                        **bruteforce_tool_config,
                        'timeout': timeout_value,
                    }
                    logger.info(
                        "subdomain_bruteforce using auto timeout: %s s (wordlist lines=%s, 3 s per line)",
                        timeout_value,
                        line_count_int,
                    )
                brute_output = str(result_dir / f"subs_brute_{timestamp}.txt")
                brute_result = _run_single_tool(
                    tool_name='subdomain_bruteforce',
                    tool_config=bruteforce_tool_config,
                    command_params={
                        'domain': domain_name,
                        'wordlist': local_wordlist_path,
                        'output_file': brute_output
                    },
                    result_dir=result_dir
                )
                if brute_result:
                    # Merge Stage 1 + Stage 2
                    current_result = _merge_files(
                        [current_result, brute_result],
                        str(result_dir / f"subs_merged_{timestamp}.txt")
                    )
                    successful_tool_names.append('subdomain_bruteforce')
                    executed_tasks.append('bruteforce')
                else:
                    failed_tools.append({'tool': 'subdomain_bruteforce', 'reason': 'execution failed'})
            except Exception as exc:
                logger.warning("Wordlist preparation failed, skipping bruteforce: %s", exc)
                failed_tools.append({'tool': 'subdomain_bruteforce', 'reason': str(exc)})
# ==================== Stage 3: permutation generation + validation (optional) ====================
permutation_enabled = permutation_config.get('enabled', False)
if permutation_enabled:
logger.info("=" * 40)
logger.info("Stage 3: permutation generation + liveness validation (streaming pipeline)")
logger.info("=" * 40)
permutation_tool_config = permutation_config.get('subdomain_permutation_resolve', {})
# === Step 3.1: wildcard-DNS sampling detection ===
# Generate a permuted sample 100x the input file and check whether more than 50x resolve
before_count = _count_lines(current_result)
# Tuning parameters
SAMPLE_MULTIPLIER = 100  # sample size = input lines x 100
EXPANSION_THRESHOLD = 50  # expansion threshold = input lines x 50
SAMPLE_TIMEOUT = 7200  # sampling timeout: 2 hours
sample_size = before_count * SAMPLE_MULTIPLIER
max_allowed = before_count * EXPANSION_THRESHOLD
sample_output = str(result_dir / f"subs_permuted_sample_{timestamp}.txt")
sample_cmd = (
f"cat {current_result} | dnsgen - | head -n {sample_size} | "
f"puredns resolve -r /app/backend/resources/resolvers.txt "
f"--write {sample_output} --wildcard-tests 50 --wildcard-batch 1000000 --quiet"
)
logger.info(
f"Wildcard sampling check: {before_count} input domains, "
f"sample size {sample_size}, threshold {max_allowed}"
)
try:
subprocess.run(
sample_cmd,
shell=True,
timeout=SAMPLE_TIMEOUT,
check=False,
capture_output=True
)
sample_result_count = _count_lines(sample_output) if Path(sample_output).exists() else 0
logger.info(
f"Sampling result: {sample_result_count} live domains "
f"(input: {before_count}, threshold: {max_allowed})"
)
if sample_result_count > max_allowed:
# Sample exceeds the threshold: wildcard DNS detected, skip the full permutation
ratio = sample_result_count / before_count if before_count > 0 else sample_result_count
logger.warning(
f"Skipping permutation: sampling detected wildcard DNS "
f"({sample_result_count} > {max_allowed}, expansion ratio {ratio:.1f}x)"
)
failed_tools.append({
'tool': 'subdomain_permutation_resolve',
'reason': f"sampling detected wildcard DNS (expansion ratio {ratio:.1f}x)"
})
else:
# === Step 3.2: sampling passed, run the full permutation ===
logger.info("Sampling check passed, running full permutation...")
permuted_output = str(result_dir / f"subs_permuted_{timestamp}.txt")
permuted_result = _run_single_tool(
tool_name='subdomain_permutation_resolve',
tool_config=permutation_tool_config,
command_params={
'input_file': current_result,
'output_file': permuted_output,
},
result_dir=result_dir
)
if permuted_result:
# Merge the original results with the validated permutation results
current_result = _merge_files(
[current_result, permuted_result],
str(result_dir / f"subs_with_permuted_{timestamp}.txt")
)
successful_tool_names.append('subdomain_permutation_resolve')
executed_tasks.append('permutation')
else:
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': 'execution failed'})
except subprocess.TimeoutExpired:
logger.warning(f"Sampling check timed out ({SAMPLE_TIMEOUT} s), skipping permutation")
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': 'sampling check timed out'})
except Exception as e:
logger.warning(f"Sampling check failed: {e}, skipping permutation")
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': f'sampling check failed: {e}'})
# ==================== Stage 4: DNS liveness validation (optional) ====================
# Runs whenever resolve.enabled is true, regardless of Stage 3, applying a unified DNS check to all current candidate subdomains
resolve_enabled = resolve_config.get('enabled', False)
if resolve_enabled:
logger.info("=" * 40)
logger.info("Stage 4: DNS liveness validation")
logger.info("=" * 40)
resolve_tool_config = resolve_config.get('subdomain_resolve', {})
# Compute timeout dynamically from the candidate subdomain count (supports timeout: auto)
timeout_value = resolve_tool_config.get('timeout', 3600)
if timeout_value == 'auto':
line_count = 0
try:
with open(current_result, 'rb') as f:
line_count = sum(1 for _ in f)
except OSError:
line_count = 0
try:
line_count_int = int(line_count)
except (TypeError, ValueError):
line_count_int = 0
timeout_value = line_count_int * 3 if line_count_int > 0 else 3600
resolve_tool_config = {
**resolve_tool_config,
'timeout': timeout_value,
}
logger.info(
"subdomain_resolve using auto timeout: %s seconds (candidates=%s, 3 s/domain)",
timeout_value,
line_count_int,
)
alive_output = str(result_dir / f"subs_alive_{timestamp}.txt")
alive_result = _run_single_tool(
tool_name='subdomain_resolve',
tool_config=resolve_tool_config,
command_params={
'input_file': current_result,
'output_file': alive_output,
},
result_dir=result_dir
)
if alive_result:
current_result = alive_result
successful_tool_names.append('subdomain_resolve')
executed_tasks.append('resolve')
else:
failed_tools.append({'tool': 'subdomain_resolve', 'reason': 'execution failed'})
# ==================== Final: persist to database ====================
logger.info("=" * 40)
logger.info("Final: persisting results to the database")
logger.info("=" * 40)
# Final validation and persistence
final_file = merge_and_validate_task(
result_files=[current_result],
result_dir=str(result_dir)
)
save_result = save_domains_task(
domains_file=final_file,
scan_id=scan_id,
target_id=target_id
)
processed_domains = save_result.get('processed_records', 0)
executed_tasks.append('save_domains')
logger.info("="*60 + "\n✓ Subdomain discovery scan completed\n" + "="*60)
return {
'success': True,
'scan_id': scan_id,
'target': domain_name,
'scan_workspace_dir': scan_workspace_dir,
'total': processed_domains,
'executed_tasks': executed_tasks,
'tool_stats': {
'total': len(enabled_passive_tools) + (1 if bruteforce_enabled else 0) +
(1 if permutation_enabled else 0) + (1 if resolve_enabled else 0),
'successful': len(successful_tool_names),
'failed': len(failed_tools),
'successful_tools': successful_tool_names,
'failed_tools': failed_tools
}
}
except ValueError as e:
logger.error("Configuration error: %s", e)
raise
except RuntimeError as e:
logger.error("Runtime error: %s", e)
raise
except Exception as e:
logger.exception("Subdomain discovery scan failed: %s", e)
raise
def _empty_result(scan_id: int, target: str, scan_workspace_dir: str) -> dict:
"""Return an empty result payload."""
return {
'success': True,
'scan_id': scan_id,
'target': target,
'scan_workspace_dir': scan_workspace_dir,
'total': 0,
'executed_tasks': [],
'tool_stats': {
'total': 0,
'successful': 0,
'failed': 0,
'successful_tools': [],
'failed_tools': []
}
}
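# The Stage 3 wildcard heuristic above can be isolated into a tiny helper. A minimal
# sketch follows; constant names mirror SAMPLE_MULTIPLIER / EXPANSION_THRESHOLD above,
# but the real pipeline shells out to dnsgen and puredns, which this sketch does not do.

```python
def should_skip_permutation(before_count: int, resolved_sample_count: int,
                            expansion_threshold: int = 50) -> bool:
    """Return True when the resolved sample exceeds before_count * expansion_threshold,
    which the flow treats as evidence of a wildcard DNS configuration."""
    max_allowed = before_count * expansion_threshold
    return resolved_sample_count > max_allowed
```

The flow records the expansion ratio in `failed_tools` when this trips, rather than raising, so the remaining stages still run.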

# ==================== (deleted file, 241 lines) ====================
from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
from datetime import datetime
from pathlib import Path
from typing import Dict
from prefect import flow
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_nuclei_templates_local
from apps.scan.tasks.vuln_scan import (
export_endpoints_task,
run_vuln_tool_task,
run_and_stream_save_dalfox_vulns_task,
run_and_stream_save_nuclei_vulns_task,
)
from .utils import calculate_timeout_by_line_count
logger = logging.getLogger(__name__)
def _setup_vuln_scan_directory(scan_workspace_dir: str) -> Path:
vuln_scan_dir = Path(scan_workspace_dir) / "vuln_scan"
vuln_scan_dir.mkdir(parents=True, exist_ok=True)
return vuln_scan_dir
@flow(
name="endpoints_vuln_scan_flow",
log_prints=True,
)
def endpoints_vuln_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: Dict[str, dict],
) -> dict:
"""Endpoint-based vulnerability scan flow (runs tools such as Dalfox and Nuclei)."""
try:
if scan_id is None:
raise ValueError("scan_id must not be None")
if not target_name:
raise ValueError("target_name must not be empty")
if target_id is None:
raise ValueError("target_id must not be None")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir must not be empty")
if not enabled_tools:
raise ValueError("enabled_tools must not be empty")
vuln_scan_dir = _setup_vuln_scan_directory(scan_workspace_dir)
endpoints_file = vuln_scan_dir / "input_endpoints.txt"
# Step 1: export endpoint URLs
export_result = export_endpoints_task(
target_id=target_id,
output_file=str(endpoints_file),
target_name=target_name,  # used to generate a default endpoint when none exist
)
total_endpoints = export_result.get("total_count", 0)
if total_endpoints == 0 or not endpoints_file.exists() or endpoints_file.stat().st_size == 0:
logger.warning("No endpoints available for the target, skipping vulnerability scan")
return {
"success": True,
"scan_id": scan_id,
"target": target_name,
"scan_workspace_dir": scan_workspace_dir,
"endpoints_file": str(endpoints_file),
"endpoint_count": 0,
"executed_tools": [],
"tool_results": {},
}
logger.info("Endpoint export finished with %d entries, starting vulnerability scan", total_endpoints)
tool_results: Dict[str, dict] = {}
# Step 2: run each vulnerability scan tool in parallel (currently mainly Dalfox)
# 1) submit a Prefect task per tool so workers can schedule them in parallel
# 2) then collect each result and assemble tool_results
tool_futures: Dict[str, dict] = {}
for tool_name, tool_config in enabled_tools.items():
# Nuclei needs its local templates ensured first (multiple template repos supported)
template_args = ""
if tool_name == "nuclei":
repo_names = tool_config.get("template_repo_names")
if not repo_names or not isinstance(repo_names, (list, tuple)):
logger.error("Nuclei config is missing template_repo_names (array), skipping")
continue
template_paths = []
try:
for repo_name in repo_names:
path = ensure_nuclei_templates_local(repo_name)
template_paths.append(path)
logger.info("Nuclei template path [%s]: %s", repo_name, path)
except Exception as e:
logger.error("Failed to fetch Nuclei templates: %s, skipping nuclei scan", e)
continue
template_args = " ".join(f"-t {p}" for p in template_paths)
# Build command parameters
command_params = {"endpoints_file": str(endpoints_file)}
if template_args:
command_params["template_args"] = template_args
command = build_scan_command(
tool_name=tool_name,
scan_type="vuln_scan",
command_params=command_params,
tool_config=tool_config,
)
raw_timeout = tool_config.get("timeout", 600)
if isinstance(raw_timeout, str) and raw_timeout == "auto":
# With timeout=auto, derive the timeout from the endpoints_file line count
# Dalfox: 100 s per line; Nuclei: 30 s per line
base_per_time = 30 if tool_name == "nuclei" else 100
timeout = calculate_timeout_by_line_count(
tool_config=tool_config,
file_path=str(endpoints_file),
base_per_time=base_per_time,
)
else:
try:
timeout = int(raw_timeout)
except (TypeError, ValueError) as e:
raise ValueError(
f"Invalid timeout config for tool {tool_name}: {raw_timeout!r}"
) from e
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
log_file = vuln_scan_dir / f"{tool_name}_{timestamp}.log"
# Dalfox XSS uses a streaming task that saves vulnerabilities while parsing output
if tool_name == "dalfox_xss":
logger.info("Submitted vulnerability scan tool %s (streaming result save)", tool_name)
future = run_and_stream_save_dalfox_vulns_task.submit(
cmd=command,
tool_name=tool_name,
scan_id=scan_id,
target_id=target_id,
cwd=str(vuln_scan_dir),
shell=True,
batch_size=1,
timeout=timeout,
log_file=str(log_file),
)
tool_futures[tool_name] = {
"future": future,
"command": command,
"timeout": timeout,
"log_file": str(log_file),
"mode": "streaming",
}
elif tool_name == "nuclei":
# Nuclei also uses a streaming task
logger.info("Submitted vulnerability scan tool %s (streaming result save)", tool_name)
future = run_and_stream_save_nuclei_vulns_task.submit(
cmd=command,
tool_name=tool_name,
scan_id=scan_id,
target_id=target_id,
cwd=str(vuln_scan_dir),
shell=True,
batch_size=1,
timeout=timeout,
log_file=str(log_file),
)
tool_futures[tool_name] = {
"future": future,
"command": command,
"timeout": timeout,
"log_file": str(log_file),
"mode": "streaming",
}
else:
# Other tools still use the non-streaming execution path
logger.info("Submitted vulnerability scan tool %s", tool_name)
future = run_vuln_tool_task.submit(
tool_name=tool_name,
command=command,
timeout=timeout,
log_file=str(log_file),
)
tool_futures[tool_name] = {
"future": future,
"command": command,
"timeout": timeout,
"log_file": str(log_file),
"mode": "normal",
}
# Collect execution results from all tools
for tool_name, meta in tool_futures.items():
future = meta["future"]
result = future.result()
if meta["mode"] == "streaming":
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"processed_records": result.get("processed_records"),
"created_vulns": result.get("created_vulns"),
"command_log_file": meta["log_file"],
}
else:
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"duration": result.get("duration"),
"returncode": result.get("returncode"),
"command_log_file": result.get("command_log_file"),
}
return {
"success": True,
"scan_id": scan_id,
"target": target_name,
"scan_workspace_dir": scan_workspace_dir,
"endpoints_file": str(endpoints_file),
"endpoint_count": total_endpoints,
"executed_tools": list(enabled_tools.keys()),
"tool_results": tool_results,
}
except Exception as e:
logger.exception("Endpoint vulnerability scan failed: %s", e)
raise
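# calculate_timeout_by_line_count is imported from .utils and not shown in this diff.
# A hedged sketch of what it is assumed to do for timeout="auto" (the real helper may
# differ): the timeout scales linearly with the input file's line count, with a
# fallback for empty or unreadable files.

```python
from pathlib import Path


def auto_timeout(file_path: str, base_per_time: int, fallback: int = 600) -> int:
    """Assumed behavior: line_count * base_per_time, or `fallback` when no lines."""
    try:
        line_count = sum(1 for _ in Path(file_path).open("rb"))
    except OSError:
        line_count = 0
    return line_count * base_per_time if line_count > 0 else fallback
```

With the per-line rates above, a 3-line endpoints file and Nuclei's 30 s/line would yield a 90-second timeout.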

# ==================== (deleted file, 107 lines) ====================
from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
from typing import Dict, Tuple
from prefect import flow
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.configs.command_templates import get_command_template
from .endpoints_vuln_scan_flow import endpoints_vuln_scan_flow
logger = logging.getLogger(__name__)
def _classify_vuln_tools(enabled_tools: Dict[str, dict]) -> Tuple[Dict[str, dict], Dict[str, dict]]:
"""Classify vulnerability scan tools by the input_type declared in their command templates.
Currently supported:
- endpoints_file: takes an endpoint list file as input (e.g. Dalfox XSS)
Reserved:
- other input_type values go into other_tools and are not handled yet.
"""
endpoints_tools: Dict[str, dict] = {}
other_tools: Dict[str, dict] = {}
for tool_name, tool_config in enabled_tools.items():
template = get_command_template("vuln_scan", tool_name) or {}
input_type = template.get("input_type", "endpoints_file")
if input_type == "endpoints_file":
endpoints_tools[tool_name] = tool_config
else:
other_tools[tool_name] = tool_config
return endpoints_tools, other_tools
@flow(
name="vuln_scan",
log_prints=True,
on_running=[on_scan_flow_running],
on_completion=[on_scan_flow_completed],
on_failure=[on_scan_flow_failed],
)
def vuln_scan_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: Dict[str, dict],
) -> dict:
"""Main vulnerability scan flow: orchestrates the vulnerability scan sub-flows serially.
Supported tools:
- dalfox_xss: XSS scanning (streaming save)
- nuclei: generic vulnerability scanning (streaming save, with template commit hash sync)
"""
try:
if scan_id is None:
raise ValueError("scan_id must not be None")
if not target_name:
raise ValueError("target_name must not be empty")
if target_id is None:
raise ValueError("target_id must not be None")
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir must not be empty")
if not enabled_tools:
raise ValueError("enabled_tools must not be empty")
# Step 1: classify tools
endpoints_tools, other_tools = _classify_vuln_tools(enabled_tools)
logger.info(
"Vulnerability tool classification - endpoints_file: %s, other: %s",
list(endpoints_tools.keys()) or "",
list(other_tools.keys()) or "",
)
if other_tools:
logger.warning(
"Vulnerability scan tools with unsupported input types will be ignored: %s",
list(other_tools.keys()),
)
if not endpoints_tools:
raise ValueError("Vulnerability scanning requires at least one tool that takes endpoints_file input (e.g. dalfox_xss, nuclei)")
# Step 2: run the endpoint vulnerability scan sub-flow (serial)
endpoint_result = endpoints_vuln_scan_flow(
scan_id=scan_id,
target_name=target_name,
target_id=target_id,
scan_workspace_dir=scan_workspace_dir,
enabled_tools=endpoints_tools,
)
# Only one sub-flow for now; return its result directly
return endpoint_result
except Exception as e:
logger.exception("Main vulnerability scan flow failed: %s", e)
raise

# ==================== (deleted file, 182 lines) ====================
"""
Scan flow handlers.
Handle state changes and notifications for scan flows (port scan, subdomain discovery, etc.).
Responsibilities:
- update per-stage progress (running/completed/failed)
- send notifications for scan stages
- record flow performance metrics
"""
import logging
from prefect import Flow
from prefect.client.schemas import FlowRun, State
from apps.scan.utils.performance import FlowPerformanceTracker
logger = logging.getLogger(__name__)
# Per-flow-run performance trackers
_flow_trackers: dict[str, FlowPerformanceTracker] = {}
def _get_stage_from_flow_name(flow_name: str) -> str | None:
"""
Map a flow name to its stage.
The flow name is used directly as the stage (matching the engine_config keys).
The main flow (initiate_scan) is excluded.
"""
# Exclude the main flow; it is not a stage flow
if flow_name == 'initiate_scan':
return None
return flow_name
def on_scan_flow_running(flow: Flow, flow_run: FlowRun, state: State) -> None:
"""
Callback fired when a scan flow starts running.
Responsibilities:
- mark the stage progress as running
- send a scan-start notification
- start performance tracking
Args:
flow: Prefect Flow object
flow_run: flow run instance
state: current flow state
"""
logger.info("🚀 Scan flow started - Flow: %s, Run ID: %s", flow.name, flow_run.id)
# Extract flow parameters
flow_params = flow_run.parameters or {}
scan_id = flow_params.get('scan_id')
target_name = flow_params.get('target_name', 'unknown')
target_id = flow_params.get('target_id')
# Start performance tracking
if scan_id:
tracker = FlowPerformanceTracker(flow.name, scan_id)
tracker.start(target_id=target_id, target_name=target_name)
_flow_trackers[str(flow_run.id)] = tracker
# Update stage progress
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
try:
from apps.scan.services import ScanService
service = ScanService()
service.start_stage(scan_id, stage)
logger.info(f"✓ Stage progress set to running - Scan ID: {scan_id}, Stage: {stage}")
except Exception as e:
logger.error(f"Failed to update stage progress - Scan ID: {scan_id}, Stage: {stage}: {e}")
def on_scan_flow_completed(flow: Flow, flow_run: FlowRun, state: State) -> None:
"""
Callback fired when a scan flow completes.
Responsibilities:
- mark the stage progress as completed
- send a scan-completion notification (optional)
- record performance metrics
Args:
flow: Prefect Flow object
flow_run: flow run instance
state: current flow state
"""
logger.info("✅ Scan flow completed - Flow: %s, Run ID: %s", flow.name, flow_run.id)
# Extract flow parameters
flow_params = flow_run.parameters or {}
scan_id = flow_params.get('scan_id')
# Fetch the flow result
result = None
try:
result = state.result() if state.result else None
except Exception:
pass
# Record performance metrics
tracker = _flow_trackers.pop(str(flow_run.id), None)
if tracker:
tracker.finish(success=True)
# Update stage progress
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
try:
from apps.scan.services import ScanService
service = ScanService()
# Extract detail from the flow result (if any)
detail = None
if isinstance(result, dict):
detail = result.get('detail')
service.complete_stage(scan_id, stage, detail)
logger.info(f"✓ Stage progress set to completed - Scan ID: {scan_id}, Stage: {stage}")
# Refresh cached stats after each stage so the frontend sees increments in real time
try:
service.update_cached_stats(scan_id)
logger.info("✓ Cached stats refreshed after stage completion - Scan ID: %s", scan_id)
except Exception as e:
logger.error("Failed to refresh cached stats after stage completion - Scan ID: %s, error: %s", scan_id, e)
except Exception as e:
logger.error(f"Failed to update stage progress - Scan ID: {scan_id}, Stage: {stage}: {e}")
def on_scan_flow_failed(flow: Flow, flow_run: FlowRun, state: State) -> None:
"""
Callback fired when a scan flow fails.
Responsibilities:
- mark the stage progress as failed
- send a scan-failure notification
- record performance metrics (with the error message)
Args:
flow: Prefect Flow object
flow_run: flow run instance
state: current flow state
"""
logger.info("❌ Scan flow failed - Flow: %s, Run ID: %s", flow.name, flow_run.id)
# Extract flow parameters
flow_params = flow_run.parameters or {}
scan_id = flow_params.get('scan_id')
target_name = flow_params.get('target_name', 'unknown')
# Extract the error message
error_message = str(state.message) if state.message else "unknown error"
# Record performance metrics (failure case)
tracker = _flow_trackers.pop(str(flow_run.id), None)
if tracker:
tracker.finish(success=False, error_message=error_message)
# Update stage progress
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
try:
from apps.scan.services import ScanService
service = ScanService()
service.fail_stage(scan_id, stage, error_message)
logger.info(f"✓ Stage progress set to failed - Scan ID: {scan_id}, Stage: {stage}")
except Exception as e:
logger.error(f"Failed to update stage progress - Scan ID: {scan_id}, Stage: {stage}: {e}")
# Send a notification
try:
from apps.scan.notifications import create_notification, NotificationLevel
message = f"Task: {flow.name}\nStatus: failed\nError: {error_message}"
create_notification(
title=target_name,
message=message,
level=NotificationLevel.HIGH
)
logger.error(f"✓ Scan failure notification sent - Target: {target_name}, Flow: {flow.name}, Error: {error_message}")
except Exception as e:
logger.error(f"Failed to send scan failure notification - Flow: {flow.name}: {e}")

# ==================== (deleted file, 180 lines) ====================
from django.db import models
from django.contrib.postgres.fields import ArrayField
from ..common.definitions import ScanStatus
class SoftDeleteManager(models.Manager):
"""Soft-delete manager: by default returns only non-deleted records."""
def get_queryset(self):
return super().get_queryset().filter(deleted_at__isnull=True)
class Scan(models.Model):
"""Scan task model."""
id = models.AutoField(primary_key=True)
target = models.ForeignKey('targets.Target', on_delete=models.CASCADE, related_name='scans', help_text='Scan target')
engine = models.ForeignKey(
'engine.ScanEngine',
on_delete=models.CASCADE,
related_name='scans',
help_text='Scan engine to use'
)
created_at = models.DateTimeField(auto_now_add=True, help_text='Task creation time')
stopped_at = models.DateTimeField(null=True, blank=True, help_text='Scan end time')
status = models.CharField(
max_length=20,
choices=ScanStatus.choices,
default=ScanStatus.INITIATED,
db_index=True,
help_text='Task status'
)
results_dir = models.CharField(max_length=100, blank=True, default='', help_text='Results storage directory')
container_ids = ArrayField(
models.CharField(max_length=100),
blank=True,
default=list,
help_text='Container ID list (Docker container IDs)'
)
worker = models.ForeignKey(
'engine.WorkerNode',
on_delete=models.SET_NULL,
related_name='scans',
null=True,
blank=True,
help_text='Worker node executing the scan'
)
error_message = models.CharField(max_length=2000, blank=True, default='', help_text='Error message')
# ==================== Soft-delete fields ====================
deleted_at = models.DateTimeField(null=True, blank=True, db_index=True, help_text='Deletion time (NULL means not deleted)')
# ==================== Managers ====================
objects = SoftDeleteManager()  # default manager: non-deleted records only
all_objects = models.Manager()  # full manager: includes deleted records (for hard deletes)
# ==================== Progress tracking fields ====================
progress = models.IntegerField(default=0, help_text='Scan progress, 0-100')
current_stage = models.CharField(max_length=50, blank=True, default='', help_text='Current scan stage')
stage_progress = models.JSONField(default=dict, help_text='Per-stage progress details')
# ==================== Cached statistics fields ====================
cached_subdomains_count = models.IntegerField(default=0, help_text='Cached subdomain count')
cached_websites_count = models.IntegerField(default=0, help_text='Cached website count')
cached_endpoints_count = models.IntegerField(default=0, help_text='Cached endpoint count')
cached_ips_count = models.IntegerField(default=0, help_text='Cached IP address count')
cached_directories_count = models.IntegerField(default=0, help_text='Cached directory count')
cached_vulns_total = models.IntegerField(default=0, help_text='Cached total vulnerability count')
cached_vulns_critical = models.IntegerField(default=0, help_text='Cached critical vulnerability count')
cached_vulns_high = models.IntegerField(default=0, help_text='Cached high-severity vulnerability count')
cached_vulns_medium = models.IntegerField(default=0, help_text='Cached medium-severity vulnerability count')
cached_vulns_low = models.IntegerField(default=0, help_text='Cached low-severity vulnerability count')
stats_updated_at = models.DateTimeField(null=True, blank=True, help_text='Last statistics update time')
class Meta:
db_table = 'scan'
verbose_name = 'Scan task'
verbose_name_plural = 'Scan tasks'
ordering = ['-created_at']
indexes = [
models.Index(fields=['-created_at']),  # optimize the default list ordering by creation time desc
models.Index(fields=['target']),  # optimize lookups of scans by target
models.Index(fields=['deleted_at', '-created_at']),  # soft delete + time index
]
def __str__(self):
return f"Scan #{self.id} - {self.target.name}"
class ScheduledScan(models.Model):
"""
Scheduled scan task model.
Scheduling:
- APScheduler checks next_run_time every minute
- due tasks are dispatched to workers via task_distributor
- cron expressions allow flexible scheduling
Scan mode (one of two):
- organization scan: set organization; all its targets are fetched dynamically at run time
- target scan: set target to scan a single target
- organization takes precedence over target
"""
id = models.AutoField(primary_key=True)
# Basic info
name = models.CharField(max_length=200, help_text='Task name')
# Associated scan engine
engine = models.ForeignKey(
'engine.ScanEngine',
on_delete=models.CASCADE,
related_name='scheduled_scans',
help_text='Scan engine to use'
)
# Associated organization (organization mode: targets fetched dynamically at run time)
organization = models.ForeignKey(
'targets.Organization',
on_delete=models.CASCADE,
related_name='scheduled_scans',
null=True,
blank=True,
help_text='Organization to scan (when set, all its targets are fetched at run time)'
)
# Associated target (target mode: scan a single target)
target = models.ForeignKey(
'targets.Target',
on_delete=models.CASCADE,
related_name='scheduled_scans',
null=True,
blank=True,
help_text='Single target to scan (mutually exclusive with organization)'
)
# Scheduling config - plain cron expression
cron_expression = models.CharField(
max_length=100,
default='0 2 * * *',
help_text='Cron expression: minute hour day month weekday'
)
# Status
is_enabled = models.BooleanField(default=True, db_index=True, help_text='Whether enabled')
# Execution statistics
run_count = models.IntegerField(default=0, help_text='Number of runs so far')
last_run_time = models.DateTimeField(null=True, blank=True, help_text='Last run time')
next_run_time = models.DateTimeField(null=True, blank=True, help_text='Next run time')
# Timestamps
created_at = models.DateTimeField(auto_now_add=True, help_text='Creation time')
updated_at = models.DateTimeField(auto_now=True, help_text='Update time')
class Meta:
db_table = 'scheduled_scan'
verbose_name = 'Scheduled scan task'
verbose_name_plural = 'Scheduled scan tasks'
ordering = ['-created_at']
indexes = [
models.Index(fields=['-created_at']),
models.Index(fields=['is_enabled', '-created_at']),
models.Index(fields=['name']),  # optimize name search
]
def __str__(self):
return f"ScheduledScan #{self.id} - {self.name}"
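# The default cron_expression '0 2 * * *' means "daily at 02:00". A quick stdlib
# sanity check for the five-field format stored above could look like this; the
# actual scheduler is assumed to use a proper cron parser (e.g. via APScheduler),
# so this only validates the shape, not the semantics.

```python
def looks_like_cron(expr: str) -> bool:
    """Shape-only check: five whitespace-separated fields of cron-safe characters."""
    fields = expr.split()
    if len(fields) != 5:
        return False
    allowed = set("0123456789*/,-")
    return all(f and set(f) <= allowed for f in fields)
```

This would accept expressions such as `*/5 1-3 * * 0,6` while rejecting free-form text, which is enough for an early validation pass before handing the string to the scheduler.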

# ==================== (deleted file, 189 lines) ====================
#!/usr/bin/env python
"""
Scan task launcher script.
Runs when a dynamic scan container starts.
Configuration must be fetched and environment variables set before Django is imported.
"""
import argparse
import sys
import os
import traceback
def diagnose_prefect_environment():
"""Diagnose the Prefect runtime environment, printing details for troubleshooting."""
print("\n" + "="*60)
print("Prefect environment diagnostics")
print("="*60)
# 1. Check Prefect-related environment variables
print("\n[diag] Prefect environment variables:")
prefect_vars = [
'PREFECT_HOME',
'PREFECT_API_URL',
'PREFECT_SERVER_EPHEMERAL_ENABLED',
'PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS',
'PREFECT_SERVER_DATABASE_CONNECTION_URL',
'PREFECT_LOGGING_LEVEL',
'PREFECT_DEBUG_MODE',
]
for var in prefect_vars:
value = os.environ.get(var, 'NOT SET')
print(f" {var}={value}")
# 2. Check the PREFECT_HOME directory
prefect_home = os.environ.get('PREFECT_HOME', os.path.expanduser('~/.prefect'))
print(f"\n[diag] PREFECT_HOME directory: {prefect_home}")
if os.path.exists(prefect_home):
print(f" ✓ directory exists")
print(f" writable: {os.access(prefect_home, os.W_OK)}")
try:
files = os.listdir(prefect_home)
print(f" files: {files[:10]}{'...' if len(files) > 10 else ''}")
except Exception as e:
print(f" ✗ cannot list files: {e}")
else:
print(f" directory missing, trying to create it...")
try:
os.makedirs(prefect_home, exist_ok=True)
print(f" ✓ created")
except Exception as e:
print(f" ✗ creation failed: {e}")
# 3. Check uvicorn availability
print(f"\n[diag] uvicorn availability:")
import shutil
uvicorn_path = shutil.which('uvicorn')
if uvicorn_path:
print(f" ✓ uvicorn path: {uvicorn_path}")
else:
print(f" ✗ uvicorn not on PATH")
print(f" PATH: {os.environ.get('PATH', 'NOT SET')}")
# 4. Check the Prefect version
print(f"\n[diag] Prefect version:")
try:
import prefect
print(f" ✓ prefect=={prefect.__version__}")
except Exception as e:
print(f" ✗ cannot import prefect: {e}")
# 5. Check SQLite support
print(f"\n[diag] SQLite support:")
try:
import sqlite3
print(f" ✓ sqlite3 version: {sqlite3.sqlite_version}")
# Test database creation
test_db = os.path.join(prefect_home, 'test.db')
conn = sqlite3.connect(test_db)
conn.execute('CREATE TABLE IF NOT EXISTS test (id INTEGER)')
conn.close()
os.remove(test_db)
print(f" ✓ SQLite read/write test passed")
except Exception as e:
print(f" ✗ SQLite test failed: {e}")
# 6. Check port binding capability
print(f"\n[diag] Port binding test:")
try:
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('127.0.0.1', 0))
port = sock.getsockname()[1]
sock.close()
print(f" ✓ can bind to 127.0.0.1 (test port: {port})")
except Exception as e:
print(f" ✗ port binding failed: {e}")
# 7. Check memory
print(f"\n[diag] System resources:")
try:
import psutil
mem = psutil.virtual_memory()
print(f" total memory: {mem.total / 1024 / 1024:.0f} MB")
print(f" available memory: {mem.available / 1024 / 1024:.0f} MB")
print(f" memory usage: {mem.percent}%")
except ImportError:
print(f" psutil not installed, skipping memory check")
except Exception as e:
print(f" ✗ resource check failed: {e}")
print("\n" + "="*60)
print("Diagnostics complete")
print("="*60 + "\n")
def main():
print("="*60)
print("run_initiate_scan.py starting")
print(f" Python: {sys.version}")
print(f" CWD: {os.getcwd()}")
print(f" SERVER_URL: {os.environ.get('SERVER_URL', 'NOT SET')}")
print("="*60)
# 1. Fetch config from the config center and initialize Django (must happen before Django imports)
print("[1/4] Fetching config from the config center...")
try:
from apps.common.container_bootstrap import fetch_config_and_setup_django
fetch_config_and_setup_django()
print("[1/4] ✓ config fetched")
except Exception as e:
print(f"[1/4] ✗ config fetch failed: {e}")
traceback.print_exc()
sys.exit(1)
# 2. Parse command-line arguments
print("[2/4] Parsing command-line arguments...")
parser = argparse.ArgumentParser(description="Run the scan initialization flow")
parser.add_argument("--scan_id", type=int, required=True, help="scan task ID")
parser.add_argument("--target_name", type=str, required=True, help="target name")
parser.add_argument("--target_id", type=int, required=True, help="target ID")
parser.add_argument("--scan_workspace_dir", type=str, required=True, help="scan workspace directory")
parser.add_argument("--engine_name", type=str, required=True, help="engine name")
parser.add_argument("--scheduled_scan_name", type=str, default=None, help="scheduled scan name (optional)")
args = parser.parse_args()
print(f"[2/4] ✓ arguments parsed:")
print(f" scan_id: {args.scan_id}")
print(f" target_name: {args.target_name}")
print(f" target_id: {args.target_id}")
print(f" scan_workspace_dir: {args.scan_workspace_dir}")
print(f" engine_name: {args.engine_name}")
print(f" scheduled_scan_name: {args.scheduled_scan_name}")
# 2.5. Run Prefect environment diagnostics (DEBUG mode only)
if os.environ.get('DEBUG', '').lower() == 'true':
diagnose_prefect_environment()
# 3. Django-dependent modules can now be imported safely
print("[3/4] Importing initiate_scan_flow...")
try:
from apps.scan.flows.initiate_scan_flow import initiate_scan_flow
print("[3/4] ✓ import succeeded")
except Exception as e:
print(f"[3/4] ✗ import failed: {e}")
traceback.print_exc()
sys.exit(1)
# 4. Run the flow
print("[4/4] Running initiate_scan_flow...")
try:
result = initiate_scan_flow(
scan_id=args.scan_id,
target_name=args.target_name,
target_id=args.target_id,
scan_workspace_dir=args.scan_workspace_dir,
engine_name=args.engine_name,
scheduled_scan_name=args.scheduled_scan_name,
)
print("[4/4] ✓ flow finished")
print(f"Result: {result}")
except Exception as e:
print(f"[4/4] ✗ flow failed: {e}")
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

# ==================== (deleted file, 245 lines) ====================
from rest_framework import serializers
from django.db.models import Count
from .models import Scan, ScheduledScan
class ScanSerializer(serializers.ModelSerializer):
"""Scan task serializer."""
target_name = serializers.SerializerMethodField()
engine_name = serializers.SerializerMethodField()
class Meta:
model = Scan
fields = [
'id', 'target', 'target_name', 'engine', 'engine_name',
'created_at', 'stopped_at', 'status', 'results_dir',
'container_ids', 'error_message'
]
read_only_fields = [
'id', 'created_at', 'stopped_at', 'results_dir',
'container_ids', 'error_message', 'status'
]
def get_target_name(self, obj):
"""Return the target name."""
return obj.target.name if obj.target else None
def get_engine_name(self, obj):
"""Return the engine name."""
return obj.engine.name if obj.engine else None
class ScanHistorySerializer(serializers.ModelSerializer):
"""Serializer dedicated to the scan history list.
Provides an optimized payload for the frontend scan history page, including:
- scan summary statistics (subdomain, endpoint, vulnerability counts)
- progress percentage and current stage
"""
# Field mappings
target_name = serializers.CharField(source='target.name', read_only=True)
engine_name = serializers.CharField(source='engine.name', read_only=True)
# Computed fields
summary = serializers.SerializerMethodField()
# Progress tracking fields (read directly from the model)
progress = serializers.IntegerField(read_only=True)
current_stage = serializers.CharField(read_only=True)
stage_progress = serializers.JSONField(read_only=True)
class Meta:
model = Scan
fields = [
'id', 'target', 'target_name', 'engine', 'engine_name',
'created_at', 'status', 'error_message', 'summary', 'progress',
'current_stage', 'stage_progress'
]
def get_summary(self, obj):
"""Build the scan summary payload.
Design principles:
- subdomains/websites/endpoints/IPs/directories use cached fields (no live COUNT)
- vulnerability stats use cached fields on Scan, aggregated once when the scan ends
"""
# 1. Base statistics from cached fields (subdomains, websites, endpoints, IPs, directories)
summary = {
'subdomains': obj.cached_subdomains_count or 0,
'websites': obj.cached_websites_count or 0,
'endpoints': obj.cached_endpoints_count or 0,
'ips': obj.cached_ips_count or 0,
'directories': obj.cached_directories_count or 0,
}
# 2. Cached vulnerability statistics on Scan (aggregated by severity)
summary['vulnerabilities'] = {
'total': obj.cached_vulns_total or 0,
'critical': obj.cached_vulns_critical or 0,
'high': obj.cached_vulns_high or 0,
'medium': obj.cached_vulns_medium or 0,
'low': obj.cached_vulns_low or 0,
}
return summary
class QuickScanSerializer(serializers.Serializer):
"""
Quick scan serializer.
Features:
- accepts a target list and engine config
- auto-creates/fetches targets
- kicks off a scan immediately
"""
# Maximum batch size for bulk creation
MAX_BATCH_SIZE = 1000
# Target list
targets = serializers.ListField(
child=serializers.DictField(),
help_text='Target list; each target must contain a name field'
)
# Scan engine ID
engine_id = serializers.IntegerField(
required=True,
help_text='Scan engine ID to use (required)'
)
def validate_targets(self, value):
"""Validate the target list."""
if not value:
raise serializers.ValidationError("Target list must not be empty")
# Enforce a batch size limit to avoid overloading the server
if len(value) > self.MAX_BATCH_SIZE:
raise serializers.ValidationError(
f"Quick scan supports at most {self.MAX_BATCH_SIZE} targets; {len(value)} were submitted"
)
# Validate required fields on each target
for idx, target in enumerate(value):
if 'name' not in target:
raise serializers.ValidationError(f"Target #{idx + 1} is missing the name field")
if not target['name']:
raise serializers.ValidationError(f"Target #{idx + 1} has an empty name")
return value
# ==================== Scheduled scan serializers ====================
class ScheduledScanSerializer(serializers.ModelSerializer):
"""Scheduled scan task serializer (for list and detail views)."""
# Related fields
engine_name = serializers.CharField(source='engine.name', read_only=True)
organization_id = serializers.IntegerField(source='organization.id', read_only=True, allow_null=True)
organization_name = serializers.CharField(source='organization.name', read_only=True, allow_null=True)
target_id = serializers.IntegerField(source='target.id', read_only=True, allow_null=True)
target_name = serializers.CharField(source='target.name', read_only=True, allow_null=True)
scan_mode = serializers.SerializerMethodField()
class Meta:
model = ScheduledScan
fields = [
'id', 'name',
'engine', 'engine_name',
'organization_id', 'organization_name',
'target_id', 'target_name',
'scan_mode',
'cron_expression',
'is_enabled',
'run_count', 'last_run_time', 'next_run_time',
'created_at', 'updated_at'
]
read_only_fields = [
'id', 'run_count',
'last_run_time', 'next_run_time',
'created_at', 'updated_at'
]
def get_scan_mode(self, obj):
"""Return the scan mode (organization or target)."""
return 'organization' if obj.organization_id else 'target'
class CreateScheduledScanSerializer(serializers.Serializer):
"""Serializer for creating a scheduled scan task.
Scan mode (one of two):
- organization scan: provide organization_id; targets are fetched dynamically at run time
- target scan: provide target_id to scan a single target
"""
name = serializers.CharField(max_length=200, help_text='Task name')
engine_id = serializers.IntegerField(help_text='Scan engine ID')
# Organization scan mode
organization_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Organization ID (organization mode: targets fetched dynamically at run time)'
)
# Target scan mode
target_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Target ID (target mode: scan a single target)'
)
cron_expression = serializers.CharField(
max_length=100,
default='0 2 * * *',
help_text='Cron expression: minute hour day month weekday'
)
is_enabled = serializers.BooleanField(default=True, help_text='Enable immediately')
def validate(self, data):
"""Ensure organization_id and target_id are mutually exclusive"""
organization_id = data.get('organization_id')
target_id = data.get('target_id')
if not organization_id and not target_id:
raise serializers.ValidationError('Either organization_id or target_id must be provided')
if organization_id and target_id:
raise serializers.ValidationError('organization_id and target_id are mutually exclusive')
return data
class UpdateScheduledScanSerializer(serializers.Serializer):
"""Serializer for updating a scheduled scan task"""
name = serializers.CharField(max_length=200, required=False, help_text='Task name')
engine_id = serializers.IntegerField(required=False, help_text='Scan engine ID')
# Organization scan mode
organization_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Organization ID (setting it clears target_id)'
)
# Target scan mode
target_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Target ID (setting it clears organization_id)'
)
cron_expression = serializers.CharField(max_length=100, required=False, help_text='Cron expression')
is_enabled = serializers.BooleanField(required=False, help_text='Enabled')
class ToggleScheduledScanSerializer(serializers.Serializer):
"""Serializer for toggling a scheduled scan's enabled state"""
is_enabled = serializers.BooleanField(help_text='Enabled')
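The mutual-exclusion rule enforced in `CreateScheduledScanSerializer.validate` can be reduced to a small pure function, which makes the rule easy to unit-test without DRF. This is a minimal sketch; the helper name `resolve_scan_mode` is hypothetical and not part of the codebase:

```python
from typing import Optional

def resolve_scan_mode(organization_id: Optional[int], target_id: Optional[int]) -> str:
    """Mirror of the validate() rule: exactly one of the two IDs must be set."""
    if not organization_id and not target_id:
        raise ValueError('Either organization_id or target_id must be provided')
    if organization_id and target_id:
        raise ValueError('organization_id and target_id are mutually exclusive')
    return 'organization' if organization_id else 'target'

print(resolve_scan_mode(42, None))  # organization
print(resolve_scan_mode(None, 7))   # target
```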


@@ -1,295 +0,0 @@
"""
Quick scan service
Parses user input (URLs, domains, IPs, CIDRs) and creates the corresponding asset records
"""
import logging
from dataclasses import dataclass
from typing import Optional, Literal, List, Dict, Any
from urllib.parse import urlparse
from django.db import transaction
from apps.common.validators import validate_url, detect_input_type, validate_domain, validate_ip, validate_cidr, is_valid_ip
from apps.targets.services.target_service import TargetService
from apps.targets.models import Target
from apps.asset.dtos import WebSiteDTO
from apps.asset.dtos.asset import EndpointDTO
from apps.asset.repositories.asset.website_repository import DjangoWebSiteRepository
from apps.asset.repositories.asset.endpoint_repository import DjangoEndpointRepository
logger = logging.getLogger(__name__)
@dataclass
class ParsedInputDTO:
"""
Parsed-input DTO
Used only in the quick scan flow
"""
original_input: str
input_type: Literal['url', 'domain', 'ip', 'cidr']
target_name: str  # host/domain/ip/cidr
target_type: Literal['domain', 'ip', 'cidr']
website_url: Optional[str] = None  # root URL (scheme://host[:port])
endpoint_url: Optional[str] = None  # full URL (with path)
is_valid: bool = True
error: Optional[str] = None
line_number: Optional[int] = None
class QuickScanService:
"""Quick scan service: parse inputs and create assets"""
def __init__(self):
self.target_service = TargetService()
self.website_repo = DjangoWebSiteRepository()
self.endpoint_repo = DjangoEndpointRepository()
def parse_inputs(self, inputs: List[str]) -> List[ParsedInputDTO]:
"""
Parse multi-line input
Args:
inputs: list of input strings (one per line)
Returns:
list of parse results (blank lines skipped)
"""
results = []
for line_number, input_str in enumerate(inputs, start=1):
input_str = input_str.strip()
# Skip blank lines
if not input_str:
continue
try:
# Detect the input type
input_type = detect_input_type(input_str)
if input_type == 'url':
dto = self._parse_url_input(input_str, line_number)
else:
dto = self._parse_target_input(input_str, input_type, line_number)
results.append(dto)
except ValueError as e:
# Parsing failed; record the error
results.append(ParsedInputDTO(
original_input=input_str,
input_type='domain',  # fallback type
target_name=input_str,
target_type='domain',
is_valid=False,
error=str(e),
line_number=line_number
))
return results
def _parse_url_input(self, url_str: str, line_number: int) -> ParsedInputDTO:
"""
Parse a URL input
Args:
url_str: the URL string
line_number: line number
Returns:
ParsedInputDTO
"""
# Validate the URL format
validate_url(url_str)
# Parse with the standard library
parsed = urlparse(url_str)
host = parsed.hostname  # without port
has_path = parsed.path and parsed.path != '/'
# Build root_url: scheme://host[:port]
root_url = f"{parsed.scheme}://{parsed.netloc}"
# Detect the host type: domain or ip
target_type = 'ip' if is_valid_ip(host) else 'domain'
return ParsedInputDTO(
original_input=url_str,
input_type='url',
target_name=host,
target_type=target_type,
website_url=root_url,
endpoint_url=url_str if has_path else None,
line_number=line_number
)
def _parse_target_input(
self,
input_str: str,
input_type: str,
line_number: int
) -> ParsedInputDTO:
"""
Parse a non-URL input (domain/ip/cidr)
Args:
input_str: the input string
input_type: input type
line_number: line number
Returns:
ParsedInputDTO
"""
# Validate the format
if input_type == 'domain':
validate_domain(input_str)
target_type = 'domain'
elif input_type == 'ip':
validate_ip(input_str)
target_type = 'ip'
elif input_type == 'cidr':
validate_cidr(input_str)
target_type = 'cidr'
else:
raise ValueError(f"Unknown input type: {input_type}")
return ParsedInputDTO(
original_input=input_str,
input_type=input_type,
target_name=input_str,
target_type=target_type,
website_url=None,
endpoint_url=None,
line_number=line_number
)
@transaction.atomic
def process_quick_scan(
self,
inputs: List[str],
engine_id: int
) -> Dict[str, Any]:
"""
Handle a quick scan request
Args:
inputs: list of input strings
engine_id: scan engine ID
Returns:
result dict
"""
# 1. Parse the inputs
parsed_inputs = self.parse_inputs(inputs)
# Separate valid and invalid inputs
valid_inputs = [p for p in parsed_inputs if p.is_valid]
invalid_inputs = [p for p in parsed_inputs if not p.is_valid]
if not valid_inputs:
return {
'targets': [],
'target_stats': {'created': 0, 'reused': 0, 'failed': len(invalid_inputs)},
'asset_stats': {'websites_created': 0, 'endpoints_created': 0},
'errors': [
{'line_number': p.line_number, 'input': p.original_input, 'error': p.error}
for p in invalid_inputs
]
}
# 2. Create the assets
asset_result = self.create_assets_from_parsed_inputs(valid_inputs)
# 3. Return the result
return {
'targets': asset_result['targets'],
'target_stats': asset_result['target_stats'],
'asset_stats': asset_result['asset_stats'],
'errors': [
{'line_number': p.line_number, 'input': p.original_input, 'error': p.error}
for p in invalid_inputs
]
}
def create_assets_from_parsed_inputs(
self,
parsed_inputs: List[ParsedInputDTO]
) -> Dict[str, Any]:
"""
Create assets from parse results
Args:
parsed_inputs: list of parse results (valid inputs only)
Returns:
creation result dict
"""
# 1. Collect the target data in memory, deduplicated
targets_data = {}
for dto in parsed_inputs:
if dto.target_name not in targets_data:
targets_data[dto.target_name] = {'name': dto.target_name, 'type': dto.target_type}
targets_list = list(targets_data.values())
# 2. Bulk-create the Targets (reuses the existing method)
target_result = self.target_service.batch_create_targets(targets_list)
# 3. Query the just-created Targets to build a name → id map
target_names = [d['name'] for d in targets_list]
targets = Target.objects.filter(name__in=target_names)
target_id_map = {t.name: t.id for t in targets}
# 4. Collect Website DTOs in memory, deduplicated
website_dtos = []
seen_websites = set()
for dto in parsed_inputs:
if dto.website_url and dto.website_url not in seen_websites:
seen_websites.add(dto.website_url)
target_id = target_id_map.get(dto.target_name)
if target_id:
website_dtos.append(WebSiteDTO(
target_id=target_id,
url=dto.website_url,
host=dto.target_name
))
# 5. Bulk-create Websites (skip existing rows)
websites_created = 0
if website_dtos:
websites_created = self.website_repo.bulk_create_ignore_conflicts(website_dtos)
# 6. Collect Endpoint DTOs in memory, deduplicated
endpoint_dtos = []
seen_endpoints = set()
for dto in parsed_inputs:
if dto.endpoint_url and dto.endpoint_url not in seen_endpoints:
seen_endpoints.add(dto.endpoint_url)
target_id = target_id_map.get(dto.target_name)
if target_id:
endpoint_dtos.append(EndpointDTO(
target_id=target_id,
url=dto.endpoint_url,
host=dto.target_name
))
# 7. Bulk-create Endpoints (skip existing rows)
endpoints_created = 0
if endpoint_dtos:
endpoints_created = self.endpoint_repo.bulk_create_ignore_conflicts(endpoint_dtos)
return {
'targets': list(targets),
'target_stats': {
'created': target_result['created_count'],
'reused': 0,  # bulk_create cannot distinguish new rows from reused ones
'failed': target_result['failed_count']
},
'asset_stats': {
'websites_created': websites_created,
'endpoints_created': endpoints_created
}
}
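The URL-splitting rules in `_parse_url_input` (root URL = `scheme://netloc`; an endpoint only when there is a non-trivial path) can be sketched with the standard library alone. This is a minimal sketch assuming the input has already passed URL validation; `split_url` is an illustrative helper, not part of the service:

```python
from urllib.parse import urlparse

def split_url(url: str) -> dict:
    """Minimal sketch of the root-URL/endpoint split used by _parse_url_input."""
    parsed = urlparse(url)
    has_path = bool(parsed.path) and parsed.path != '/'
    return {
        'host': parsed.hostname,  # hostname without the port
        'website_url': f"{parsed.scheme}://{parsed.netloc}",  # keeps a non-default port
        'endpoint_url': url if has_path else None,
    }

print(split_url("https://example.com:8443/admin/login"))
```

Note that, like the original code, this keys the endpoint decision on the path alone, so a URL with only a query string yields no endpoint.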


@@ -1,238 +0,0 @@
"""
Scan task service
All business logic for the Scan model
"""
from __future__ import annotations
import logging
import uuid
from typing import Dict, List, TYPE_CHECKING
from datetime import datetime
from pathlib import Path
from django.conf import settings
from django.db import transaction
from django.db.utils import DatabaseError, IntegrityError, OperationalError
from django.core.exceptions import ValidationError, ObjectDoesNotExist
from apps.scan.models import Scan
from apps.scan.repositories import DjangoScanRepository
from apps.targets.repositories import DjangoTargetRepository, DjangoOrganizationRepository
from apps.engine.repositories import DjangoEngineRepository
from apps.targets.models import Target
from apps.engine.models import ScanEngine
from apps.common.definitions import ScanStatus
logger = logging.getLogger(__name__)
class ScanService:
"""
Scan task service (coordinator)
Responsibilities:
- Coordinate the sub-services
- Provide a unified public interface
- Preserve backward compatibility
Note:
- The concrete business logic has been split into sub-services
- This class mainly delegates and coordinates
"""
# Final statuses: once set, they must not be overwritten
FINAL_STATUSES = {
ScanStatus.COMPLETED,
ScanStatus.FAILED,
ScanStatus.CANCELLED
}
def __init__(self):
"""
Initialize the service
"""
# Initialize the sub-services
from apps.scan.services.scan_creation_service import ScanCreationService
from apps.scan.services.scan_state_service import ScanStateService
from apps.scan.services.scan_control_service import ScanControlService
from apps.scan.services.scan_stats_service import ScanStatsService
self.creation_service = ScanCreationService()
self.state_service = ScanStateService()
self.control_service = ScanControlService()
self.stats_service = ScanStatsService()
# Keep ScanRepository (used by get_scan)
self.scan_repo = DjangoScanRepository()
def get_scan(self, scan_id: int, prefetch_relations: bool) -> Scan | None:
"""
Get a scan task (with related objects)
Preloads engine and target to avoid N+1 queries
Args:
scan_id: scan task ID
prefetch_relations: whether to preload related objects
Returns:
Scan object (with engine and target), or None
"""
return self.scan_repo.get_by_id(scan_id, prefetch_relations)
def get_all_scans(self, prefetch_relations: bool = True):
return self.scan_repo.get_all(prefetch_relations=prefetch_relations)
def prepare_initiate_scan(
self,
organization_id: int | None = None,
target_id: int | None = None,
engine_id: int | None = None
) -> tuple[List[Target], ScanEngine]:
"""
为创建扫描任务做准备,返回所需的目标列表和扫描引擎
"""
return self.creation_service.prepare_initiate_scan(
organization_id, target_id, engine_id
)
def create_scans(
self,
targets: List[Target],
engine: ScanEngine,
scheduled_scan_name: str | None = None
) -> List[Scan]:
"""Bulk-create scan tasks (delegated to ScanCreationService)"""
return self.creation_service.create_scans(targets, engine, scheduled_scan_name)
# ==================== Status management (delegated to ScanStateService) ====================
def update_status(
self,
scan_id: int,
status: ScanStatus,
error_message: str | None = None,
stopped_at: datetime | None = None
) -> bool:
"""Update the Scan status (delegated to ScanStateService)"""
return self.state_service.update_status(
scan_id, status, error_message, stopped_at
)
def update_status_if_match(
self,
scan_id: int,
current_status: ScanStatus,
new_status: ScanStatus,
stopped_at: datetime | None = None
) -> bool:
"""Conditionally update the Scan status (delegated to ScanStateService)"""
return self.state_service.update_status_if_match(
scan_id, current_status, new_status, stopped_at
)
def update_cached_stats(self, scan_id: int) -> dict | None:
"""Refresh the cached statistics (delegated to ScanStateService); returns the statistics dict"""
return self.state_service.update_cached_stats(scan_id)
# ==================== Progress tracking (delegated to ScanStateService) ====================
def init_stage_progress(self, scan_id: int, stages: list[str]) -> bool:
"""Initialize stage progress (delegated to ScanStateService)"""
return self.state_service.init_stage_progress(scan_id, stages)
def start_stage(self, scan_id: int, stage: str) -> bool:
"""Start a stage (delegated to ScanStateService)"""
return self.state_service.start_stage(scan_id, stage)
def complete_stage(self, scan_id: int, stage: str, detail: str | None = None) -> bool:
"""Complete a stage (delegated to ScanStateService)"""
return self.state_service.complete_stage(scan_id, stage, detail)
def fail_stage(self, scan_id: int, stage: str, error: str | None = None) -> bool:
"""Mark a stage as failed (delegated to ScanStateService)"""
return self.state_service.fail_stage(scan_id, stage, error)
def cancel_running_stages(self, scan_id: int, final_status: str = "cancelled") -> bool:
"""Cancel all running stages (delegated to ScanStateService)"""
return self.state_service.cancel_running_stages(scan_id, final_status)
# TODO: not wired up yet
def add_command_to_scan(self, scan_id: int, stage_name: str, tool_name: str, command: str) -> bool:
"""
Incrementally append a command to a scan stage
Args:
scan_id: scan task ID
stage_name: stage name (e.g. 'subdomain_discovery', 'port_scan')
tool_name: tool name
command: the executed command
Returns:
bool: whether the command was appended
"""
try:
scan = self.get_scan(scan_id, prefetch_relations=False)
if not scan:
logger.error(f"Scan task not found: {scan_id}")
return False
stage_progress = scan.stage_progress or {}
# Make sure the stage exists
if stage_name not in stage_progress:
stage_progress[stage_name] = {'status': 'running', 'commands': []}
# Make sure the commands list exists
if 'commands' not in stage_progress[stage_name]:
stage_progress[stage_name]['commands'] = []
# Append the command
command_entry = f"{tool_name}: {command}"
stage_progress[stage_name]['commands'].append(command_entry)
scan.stage_progress = stage_progress
scan.save(update_fields=['stage_progress'])
command_count = len(stage_progress[stage_name]['commands'])
logger.info(f"✓ Command recorded: {stage_name}.{tool_name} (total: {command_count})")
return True
except Exception as e:
logger.error(f"Failed to record command: {e}")
return False
# ==================== Deletion and control (delegated to ScanControlService) ====================
def delete_scans_two_phase(self, scan_ids: List[int]) -> dict:
"""Two-phase deletion of scan tasks (delegated to ScanControlService)"""
return self.control_service.delete_scans_two_phase(scan_ids)
def stop_scan(self, scan_id: int) -> tuple[bool, int]:
"""Stop a scan task (delegated to ScanControlService)"""
return self.control_service.stop_scan(scan_id)
def hard_delete_scans(self, scan_ids: List[int]) -> tuple[int, Dict[str, int]]:
"""
Hard-delete scan tasks (actually removes the data)
Runs inside the worker container; deletes soft-deleted scans and their related data.
Args:
scan_ids: list of scan task IDs
Returns:
(number deleted, details dict)
"""
return self.scan_repo.hard_delete_by_ids(scan_ids)
# ==================== Statistics (delegated to ScanStatsService) ====================
def get_statistics(self) -> dict:
"""Get scan statistics (delegated to ScanStatsService)"""
return self.stats_service.get_statistics()
# Public exports
__all__ = ['ScanService']
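`update_status_if_match` delegates a compare-and-swap style update that, together with `FINAL_STATUSES`, guards against overwriting a terminal status. A minimal in-memory sketch of those semantics; `FakeScanRepo` is a hypothetical stand-in for the Django repository, where the check would be atomic via `filter(status=current).update(...)`:

```python
FINAL_STATUSES = {'completed', 'failed', 'cancelled'}

class FakeScanRepo:
    """Hypothetical in-memory stand-in for DjangoScanRepository."""
    def __init__(self):
        self.rows = {}  # scan_id -> status

    def update_status_if_match(self, scan_id, current, new):
        # Only transition when the stored status still matches `current`
        # and is not already terminal.
        if self.rows.get(scan_id) != current:
            return False
        if self.rows[scan_id] in FINAL_STATUSES:
            return False
        self.rows[scan_id] = new
        return True

repo = FakeScanRepo()
repo.rows[1] = 'running'
print(repo.update_status_if_match(1, 'running', 'completed'))  # True
print(repo.update_status_if_match(1, 'running', 'failed'))     # False: status already moved on
```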


@@ -1,182 +0,0 @@
"""
Task that exports site URLs to a TXT file
Streams the processing so that large site counts cannot exhaust memory
Default-value mode: if there are no sites, generate default URLs from the Target type
- DOMAIN: http(s)://target_name
- IP: http(s)://ip
- CIDR: expand to http(s)://ip for every IP
"""
import logging
import ipaddress
from pathlib import Path
from prefect import task
from apps.asset.repositories import DjangoWebSiteRepository
from apps.targets.services import TargetService
from apps.targets.models import Target
logger = logging.getLogger(__name__)
@task(name="export_sites")
def export_sites_task(
target_id: int,
output_file: str,
batch_size: int = 1000,
target_name: str = None
) -> dict:
"""
Export every site URL under a target to a TXT file
Streams the export to support large result sets (100k+ sites)
Default-value mode: if there are no sites, fall back to the default site URLs http(s)://target_name
Args:
target_id: target ID
output_file: output file path (absolute)
batch_size: rows fetched per batch, default 1000
target_name: target name (used in default-value mode)
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int
}
Raises:
ValueError: invalid arguments
IOError: file write failed
"""
try:
# Initialize the repository
repository = DjangoWebSiteRepository()
logger.info("Exporting site URLs - Target ID: %d, output file: %s", target_id, output_file)
# Make sure the output directory exists
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Stream the site URLs through the repository
url_iterator = repository.get_urls_for_export(
target_id=target_id,
batch_size=batch_size
)
# Stream the writes
total_count = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url in url_iterator:
# Handle one URL at a time: read and write as we go
f.write(f"{url}\n")
total_count += 1
# Log progress every 10000 records
if total_count % 10000 == 0:
logger.info("Exported %d site URLs...", total_count)
# ==================== Lazy-default mode: generate default URLs from the Target type ====================
if total_count == 0:
total_count = _write_default_urls(target_id, target_name, output_path)
logger.info(
"✓ Site URL export complete - total: %d, file: %s (%.2f KB)",
total_count,
str(output_path),  # absolute path
output_path.stat().st_size / 1024
)
return {
'success': True,
'output_file': str(output_path),
'total_count': total_count
}
except FileNotFoundError as e:
logger.error("Output directory does not exist: %s", e)
raise
except PermissionError as e:
logger.error("No permission to write the file: %s", e)
raise
except Exception as e:
logger.exception("Site URL export failed: %s", e)
raise
def _write_default_urls(target_id: int, target_name: str, output_path: Path) -> int:
"""
Lazy-default mode: generate default URLs from the Target type
Args:
target_id: target ID
target_name: target name (currently ignored; always re-read from the database)
output_path: output file path
Returns:
int: number of URLs generated
"""
# Fetch the Target
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d does not exist; cannot generate default URLs", target_id)
return 0
target_name = target.name
target_type = target.type
logger.info("Lazy-default mode: Target type=%s, name=%s", target_type, target_name)
total_urls = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
if target_type == Target.TargetType.DOMAIN:
# Domain: generate http(s)://domain
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ Default domain URLs written: http(s)://%s", target_name)
elif target_type == Target.TargetType.IP:
# IP: generate http(s)://ip
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ Default IP URLs written: http(s)://%s", target_name)
elif target_type == Target.TargetType.CIDR:
# CIDR: expand to a URL pair per IP
try:
network = ipaddress.ip_network(target_name, strict=False)
for ip in network.hosts():  # skips the network and broadcast addresses
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls += 2
if total_urls % 10000 == 0:
logger.info("Generated %d URLs...", total_urls)
# Special-case /32 and /128 (a single IP): hosts() may be empty on Python < 3.8
if total_urls == 0:
ip = str(network.network_address)
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls = 2
logger.info("✓ Default CIDR URLs written: %d URLs (from %s)", total_urls, target_name)
except ValueError as e:
logger.error("Failed to parse CIDR: %s - %s", target_name, e)
return 0
else:
logger.warning("Unsupported Target type: %s", target_type)
return 0
return total_urls


@@ -1,206 +0,0 @@
"""
Task that exports scan targets to a TXT file
The Target type decides what gets exported:
- DOMAIN: subdomains from the Subdomain table
- IP: target.name written directly
- CIDR: every IP in the CIDR range
Streams the processing so that large result sets cannot exhaust memory
"""
import logging
import ipaddress
from pathlib import Path
from prefect import task
from apps.asset.services.asset.subdomain_service import SubdomainService
from apps.targets.services import TargetService
from apps.targets.models import Target  # only for the TargetType constants
logger = logging.getLogger(__name__)
def _export_domains(target_id: int, target_name: str, output_path: Path, batch_size: int) -> int:
"""
Export the subdomains of a domain-type target (supports default-value mode)
Args:
target_id: target ID
target_name: target name (the domain)
output_path: output file path
batch_size: batch size
Returns:
int: number of records exported
Default-value mode:
If there are no subdomains, fall back to the root domain
"""
subdomain_service = SubdomainService()
domain_iterator = subdomain_service.iter_subdomain_names_by_target(
target_id=target_id,
chunk_size=batch_size
)
total_count = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for domain_name in domain_iterator:
f.write(f"{domain_name}\n")
total_count += 1
if total_count % 10000 == 0:
logger.info("Exported %d domains...", total_count)
# ==================== Default-domain fallback: use the root domain when there are no subdomains ====================
# Written only to the file for the scan tools; never stored in the database
# The database only holds assets that were actually discovered
if total_count == 0:
logger.info("Falling back to the root domain: %s (target_id=%d)", target_name, target_id)
# File only; nothing is written to the database
with open(output_path, 'w', encoding='utf-8') as f:
f.write(f"{target_name}\n")
total_count = 1
logger.info("✓ Default domain written to file: %s", target_name)
return total_count
def _export_ip(target_name: str, output_path: Path) -> int:
"""
Export an IP-type target
Args:
target_name: the IP address
output_path: output file path
Returns:
int: number of records exported (always 1)
"""
with open(output_path, 'w', encoding='utf-8') as f:
f.write(f"{target_name}\n")
return 1
def _export_cidr(target_name: str, output_path: Path) -> int:
"""
Export a CIDR-type target, expanded into individual IPs
Args:
target_name: CIDR range (e.g. 192.168.1.0/24)
output_path: output file path
Returns:
int: number of IPs exported
"""
network = ipaddress.ip_network(target_name, strict=False)
total_count = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for ip in network.hosts():  # skips the network and broadcast addresses
f.write(f"{ip}\n")
total_count += 1
if total_count % 10000 == 0:
logger.info("Exported %d IPs...", total_count)
# /32 and /128 (a single IP) need special handling: hosts() may be empty on Python < 3.8
if total_count == 0:
with open(output_path, 'w', encoding='utf-8') as f:
f.write(f"{network.network_address}\n")
total_count = 1
return total_count
@task(name="export_scan_targets")
def export_scan_targets_task(
target_id: int,
output_file: str,
batch_size: int = 1000
) -> dict:
"""
Export scan targets to a TXT file
The Target type decides what gets exported:
- DOMAIN: subdomains from the Subdomain table (streamed; handles 100k+ domains)
- IP: target.name written directly (a single IP)
- CIDR: every usable IP in the CIDR range
Args:
target_id: target ID
output_file: output file path (absolute)
batch_size: rows fetched per batch, default 1000 (DOMAIN type only)
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int,
'target_type': str
}
Raises:
ValueError: Target does not exist
IOError: file write failed
"""
try:
# 1. Fetch the Target through the service layer
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
raise ValueError(f"Target ID {target_id} does not exist")
target_type = target.type
target_name = target.name
logger.info(
"Exporting scan targets - Target ID: %d, Name: %s, Type: %s, output file: %s",
target_id, target_name, target_type, output_file
)
# 2. Make sure the output directory exists
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# 3. Export according to the type
if target_type == Target.TargetType.DOMAIN:
total_count = _export_domains(target_id, target_name, output_path, batch_size)
type_desc = "domains"
elif target_type == Target.TargetType.IP:
total_count = _export_ip(target_name, output_path)
type_desc = "IP"
elif target_type == Target.TargetType.CIDR:
total_count = _export_cidr(target_name, output_path)
type_desc = "CIDR IPs"
else:
raise ValueError(f"Unsupported target type: {target_type}")
logger.info(
"✓ Scan target export complete - type: %s, total: %d, file: %s (%.2f KB)",
type_desc,
total_count,
str(output_path),
output_path.stat().st_size / 1024
)
return {
'success': True,
'output_file': str(output_path),
'total_count': total_count,
'target_type': target_type
}
except FileNotFoundError as e:
logger.error("Output directory does not exist: %s", e)
raise
except PermissionError as e:
logger.error("No permission to write the file: %s", e)
raise
except ValueError as e:
logger.error("Invalid arguments: %s", e)
raise
except Exception as e:
logger.exception("Scan target export failed: %s", e)
raise
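The CIDR expansion and its single-address edge case can be sketched in a few lines. A minimal sketch of the logic in `_export_cidr`; the helper name `expand_cidr` is illustrative:

```python
import ipaddress
from typing import List

def expand_cidr(cidr: str) -> List[str]:
    """Sketch of the expansion used by _export_cidr: hosts() skips the
    network and broadcast addresses, and the fallback covers single-address
    networks. Note that on Python >= 3.8, hosts() already yields the lone
    address for /32 and /128, so the fallback only matters on older
    interpreters."""
    network = ipaddress.ip_network(cidr, strict=False)
    ips = [str(ip) for ip in network.hosts()]
    return ips or [str(network.network_address)]

print(expand_cidr("192.168.1.0/30"))  # ['192.168.1.1', '192.168.1.2']
print(expand_cidr("10.0.0.5/32"))     # ['10.0.0.5']
```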


@@ -1,215 +0,0 @@
"""
Task that exports site URLs to a file
Queries host+port pairs straight from the HostPortMapping table, joins them into URLs, and writes them to a file
Default-value mode:
- If there is no HostPortMapping data, write default URLs to the file (nothing goes into the database)
- DOMAIN: http(s)://target_name
- IP: http(s)://ip
- CIDR: expand to http(s)://ip for every IP
"""
"""
import logging
import ipaddress
from pathlib import Path
from prefect import task
from typing import Optional
from apps.asset.services import HostPortMappingService
from apps.targets.services import TargetService
from apps.targets.models import Target
logger = logging.getLogger(__name__)
@task(name="export_site_urls")
def export_site_urls_task(
target_id: int,
output_file: str,
target_name: Optional[str] = None,
batch_size: int = 1000
) -> dict:
"""
Export every site URL under a target to a file (based on the HostPortMapping table)
Behavior:
1. Query every host+port pair under the target from the HostPortMapping table
2. Join them into URLs (standard ports 80/443 omit the port number)
3. Write them to the given file
Default-value mode (lazy):
- If there is no HostPortMapping data, generate default URLs from the Target type
- DOMAIN: http(s)://target_name
- IP: http(s)://ip
- CIDR: expand to http(s)://ip for every IP
Args:
target_id: target ID
output_file: output file path (absolute)
target_name: target name (used when writing lazy defaults)
batch_size: rows processed per batch, default 1000 (reserved; not used yet)
Returns:
dict: {
'success': bool,
'output_file': str,
'total_urls': int,
'association_count': int  # number of host-port associations
}
Raises:
ValueError: invalid arguments
IOError: file write failed
"""
try:
logger.info("Collecting site URLs - Target ID: %d, output file: %s", target_id, output_file)
# Make sure the output directory exists
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Query the HostPortMapping table directly, ordered by host
service = HostPortMappingService()
associations = service.iter_host_port_by_target(
target_id=target_id,
batch_size=batch_size,
)
total_urls = 0
association_count = 0
# Stream the writes
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for assoc in associations:
association_count += 1
host = assoc['host']
port = assoc['port']
# URL generation by port:
# port 80: HTTP URL only, port number omitted
# port 443: HTTPS URL only, port number omitted
# any other port: both HTTP and HTTPS URLs, with the port number
if port == 80:
# Standard HTTP port; omit it
url = f"http://{host}"
f.write(f"{url}\n")
total_urls += 1
elif port == 443:
# Standard HTTPS port; omit it
url = f"https://{host}"
f.write(f"{url}\n")
total_urls += 1
else:
# Non-standard port: write both HTTP and HTTPS URLs
http_url = f"http://{host}:{port}"
https_url = f"https://{host}:{port}"
f.write(f"{http_url}\n")
f.write(f"{https_url}\n")
total_urls += 2
# Log progress every 1000 associations
if association_count % 1000 == 0:
logger.info("Processed %d associations, generated %d URLs...", association_count, total_urls)
logger.info(
"✓ Site URL export complete - associations: %d, total URLs: %d, file: %s (%.2f KB)",
association_count,
total_urls,
str(output_path),
output_path.stat().st_size / 1024
)
# ==================== Lazy-default mode: generate default URLs from the Target type ====================
if total_urls == 0:
total_urls = _write_default_urls(target_id, target_name, output_path)
return {
'success': True,
'output_file': str(output_path),
'total_urls': total_urls,
'association_count': association_count
}
except FileNotFoundError as e:
logger.error("Output directory does not exist: %s", e)
raise
except PermissionError as e:
logger.error("No permission to write the file: %s", e)
raise
except Exception as e:
logger.exception("Site URL export failed: %s", e)
raise
def _write_default_urls(target_id: int, target_name: Optional[str], output_path: Path) -> int:
"""
Lazy-default mode: generate default URLs from the Target type
Args:
target_id: target ID
target_name: target name (currently ignored; always re-read from the database)
output_path: output file path
Returns:
int: number of URLs generated
"""
# Fetch the Target
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d does not exist; cannot generate default URLs", target_id)
return 0
target_name = target.name
target_type = target.type
logger.info("Lazy-default mode: Target type=%s, name=%s", target_type, target_name)
total_urls = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
if target_type == Target.TargetType.DOMAIN:
# Domain: generate http(s)://domain
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ Default domain URLs written: http(s)://%s", target_name)
elif target_type == Target.TargetType.IP:
# IP: generate http(s)://ip
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ Default IP URLs written: http(s)://%s", target_name)
elif target_type == Target.TargetType.CIDR:
# CIDR: expand to a URL pair per IP
try:
network = ipaddress.ip_network(target_name, strict=False)
for ip in network.hosts():  # skips the network and broadcast addresses
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls += 2
if total_urls % 10000 == 0:
logger.info("Generated %d URLs...", total_urls)
# Special-case /32 and /128 (a single IP): hosts() may be empty on Python < 3.8
if total_urls == 0:
ip = str(network.network_address)
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls = 2
logger.info("✓ Default CIDR URLs written: %d URLs (from %s)", total_urls, target_name)
except ValueError as e:
logger.error("Failed to parse CIDR: %s - %s", target_name, e)
return 0
else:
logger.warning("Unsupported Target type: %s", target_type)
return 0
return total_urls
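The port-to-URL rule in the export loop above is self-contained enough to isolate. A minimal sketch; the helper name `urls_for` is illustrative:

```python
from typing import List

def urls_for(host: str, port: int) -> List[str]:
    """Sketch of the port-to-URL rule: the standard ports fix the scheme and
    drop the port suffix; any other port gets both schemes with the port."""
    if port == 80:
        return [f"http://{host}"]
    if port == 443:
        return [f"https://{host}"]
    return [f"http://{host}:{port}", f"https://{host}:{port}"]

print(urls_for("example.com", 443))   # ['https://example.com']
print(urls_for("example.com", 8080))  # ['http://example.com:8080', 'https://example.com:8080']
```

Writing both schemes for non-standard ports trades file size for coverage: the downstream probe decides which of the two actually answers.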


@@ -1,195 +0,0 @@
"""
Task that merges and deduplicates domain names
Folds the merge + parse + validate steps into one, optimized for speed:
- Single command (LC_ALL=C sort -u)
- C-level performance in a single process
- No temporary files, zero extra overhead
- Handles tens of millions of lines
Performance:
- LC_ALL=C byte-order comparison (20-30% faster than locale-aware collation)
- One process reads all files directly (no pipe overhead)
- Constant memory footprint (~50 MB for 500k domains)
- 500k domains processed in ~0.5 s (roughly 67% faster than the Python version)
Note:
- Tool output (amass/subfinder) is already normalized (lowercase, no blank lines)
- sort -u handles dedup and ordering in one pass
- No extra filtering needed
"""
import logging
import uuid
import subprocess
from pathlib import Path
from datetime import datetime
from prefect import task
from typing import List
logger = logging.getLogger(__name__)
# Note: implemented with plain system commands; no Python buffer configuration needed
# Tool output (amass/subfinder) is already lowercase and normalized
@task(
name='merge_and_deduplicate',
retries=1,
log_prints=True
)
def merge_and_validate_task(
result_files: List[str],
result_dir: str
) -> str:
"""
Merge scan results and deduplicate (high-throughput streaming)
Steps:
1. LC_ALL=C sort -u processes the input files directly
2. Sorting and deduplication happen in a single pass
3. Returns the path of the deduplicated file
Command: LC_ALL=C sort -u file1 file2 file3 -o output
Note: tool output is already normalized (lowercase, no blank lines)
Args:
result_files: list of result file paths
result_dir: result directory
Returns:
str: path of the deduplicated domain file
Raises:
RuntimeError: processing failed
Performance:
- Pure system command (implemented in C), one minimal process
- LC_ALL=C: byte-order comparison
- sort -u: reads multiple files directly (no pipe overhead)
Design:
- One minimal command, no redundant processing
- Runs in a single process (no pipes or redirection)
- Memory is only used during sort (external sort, cannot OOM)
"""
logger.info("Merging and deduplicating %d result files (system-command fast path)", len(result_files))
result_path = Path(result_dir)
# Check that the input files exist
valid_files = []
for file_path_str in result_files:
file_path = Path(file_path_str)
if file_path.exists():
valid_files.append(str(file_path))
else:
logger.warning("Result file does not exist: %s", file_path)
if not valid_files:
raise RuntimeError("None of the result files exist")
# Build the output file path
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
short_uuid = uuid.uuid4().hex[:4]
merged_file = result_path / f"merged_{timestamp}_{short_uuid}.txt"
try:
# ==================== One system command: sort + dedup in a single pass ====================
# LC_ALL=C: byte-order comparison (20-30% faster than locale-aware collation)
# sort -u: reads multiple files directly, sorting and deduplicating
# -o: safe output (more reliable than shell redirection)
cmd = f"LC_ALL=C sort -u {' '.join(valid_files)} -o {merged_file}"
logger.debug("Running command: %s", cmd)
# Scale the timeout with the total line count of the input files
total_lines = 0
for file_path in valid_files:
try:
line_count_proc = subprocess.run(
["wc", "-l", file_path],
check=True,
capture_output=True,
text=True,
)
total_lines += int(line_count_proc.stdout.strip().split()[0])
except (subprocess.CalledProcessError, ValueError, IndexError):
continue
timeout = 3600
if total_lines > 0:
# Linear in line count: a deliberately generous 0.1 s per line
base_per_line = 0.1
est = int(total_lines * base_per_line)
timeout = max(600, est)
logger.info(
"Subdomain merge/dedup timeout auto-computed: total input lines=%d, timeout=%d",
total_lines,
timeout,
)
result = subprocess.run(
cmd,
shell=True,
check=True,
timeout=timeout
)
logger.debug("✓ Merge and dedup finished")
# ==================== Collect statistics ====================
if not merged_file.exists():
raise RuntimeError("Merged file was not created")
# Count lines (system command for large-file performance)
try:
line_count_proc = subprocess.run(
["wc", "-l", str(merged_file)],
check=True,
capture_output=True,
text=True
)
unique_count = int(line_count_proc.stdout.strip().split()[0])
except (subprocess.CalledProcessError, ValueError, IndexError) as e:
logger.warning(
"wc -l failed (file: %s); falling back to line-by-line counting in Python - error: %s",
merged_file, e
)
unique_count = 0
with open(merged_file, 'r', encoding='utf-8') as file_obj:
for _ in file_obj:
unique_count += 1
if unique_count == 0:
raise RuntimeError("No valid domains found")
file_size = merged_file.stat().st_size
logger.info(
"✓ Merge and dedup complete - unique domains: %d, file size: %.2f KB",
unique_count,
file_size / 1024
)
return str(merged_file)
except subprocess.TimeoutExpired:
error_msg = f"Merge/dedup timed out (>{timeout}s); check the data volume or system resources"
logger.warning(error_msg)  # timeouts are expected under heavy load
raise RuntimeError(error_msg)
except subprocess.CalledProcessError as e:
error_msg = f"System command failed: {e.stderr if e.stderr else str(e)}"
logger.warning(error_msg)
raise RuntimeError(error_msg) from e
except IOError as e:
error_msg = f"File read/write failed: {e}"
logger.warning(error_msg)
raise RuntimeError(error_msg) from e
except Exception as e:
error_msg = f"Merge/dedup failed: {e}"
logger.error(error_msg, exc_info=True)
raise
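The single-command merge step can be exercised end to end with a pair of small files. A minimal sketch using the safer list form of `subprocess.run` (the task itself builds a shell string); it assumes a POSIX `sort` on PATH, and `merge_unique` is an illustrative helper:

```python
import os
import subprocess
import tempfile
from pathlib import Path

def merge_unique(files, output):
    """Sketch of the merge/dedup step: LC_ALL=C switches sort to raw byte
    comparison, and -u merges, sorts and deduplicates all inputs in one pass."""
    subprocess.run(
        ["sort", "-u", *[str(f) for f in files], "-o", str(output)],
        check=True,
        env={**os.environ, "LC_ALL": "C"},
    )

tmp = Path(tempfile.mkdtemp())
a = tmp / "a.txt"
a.write_text("b.example.com\na.example.com\n")
b = tmp / "b.txt"
b.write_text("a.example.com\nc.example.com\n")
out = tmp / "merged.txt"
merge_unique([a, b], out)
print(out.read_text())
```

The `-o` flag lets `sort` own the output file, which also makes it safe for the output path to coincide with an input.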


@@ -1,168 +0,0 @@
"""
Task that exports the site URL list
Exports site URLs from the WebSite table to a file (for crawlers such as katana)
Streams the writes to avoid memory blow-ups
Lazy-default mode:
- If the WebSite table is empty, generate default URLs from the Target type
- DOMAIN: write http(s)://domain
- IP: write http(s)://ip
- CIDR: expand to every IP
"""
"""
import logging
import ipaddress
from pathlib import Path
from prefect import task
from typing import Optional
from apps.targets.services import TargetService
from apps.targets.models import Target
logger = logging.getLogger(__name__)
@task(
name='export_sites_for_url_fetch',
retries=1,
log_prints=True
)
def export_sites_task(
output_file: str,
target_id: int,
scan_id: int,
target_name: Optional[str] = None,
batch_size: int = 1000
) -> dict:
"""
Export the site URL list to a file (for crawlers such as katana)
Lazy-default mode:
- If the WebSite table is empty, generate default URLs from the Target type
- The database only stores assets that were actually discovered
Args:
output_file: output file path
target_id: target ID
scan_id: scan ID
target_name: target name (used when writing lazy defaults)
batch_size: batch size (memory optimization)
Returns:
dict: {
'output_file': str,  # output file path
'asset_count': int,  # number of assets
}
Raises:
ValueError: invalid arguments
RuntimeError: execution failed
"""
try:
logger.info("开始导出站点 URL 列表 - Target ID: %d", target_id)
# 确保输出目录存在
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# 从 WebSite 表导出站点 URL
from apps.asset.services import WebSiteService
website_service = WebSiteService()
# 流式写入文件
asset_count = 0
with open(output_path, 'w') as f:
for url in website_service.iter_website_urls_by_target(target_id, batch_size):
f.write(f"{url}\n")
asset_count += 1
if asset_count % batch_size == 0:
f.flush()
# ==================== 懒加载模式:根据 Target 类型生成默认 URL ====================
if asset_count == 0:
asset_count = _write_default_urls(target_id, target_name, output_path)
logger.info("✓ 站点 URL 导出完成 - 文件: %s, 数量: %d", output_file, asset_count)
return {
'output_file': output_file,
'asset_count': asset_count,
}
except Exception as e:
logger.error("导出站点 URL 失败: %s", e, exc_info=True)
raise RuntimeError(f"导出站点 URL 失败: {e}") from e
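The export loop above streams rows to disk and flushes once per batch to bound buffer growth. A small sketch of that streaming-write pattern, with `urls` standing in for the service's database iterator (hypothetical helper, not the production API):

```python
def export_urls_stream(urls, output_file, flush_every=1000):
    """Stream URLs to a file, flushing periodically to bound buffering.

    Sketch of the pattern used by export_sites_task; `urls` is any iterable
    (in production, a chunked database iterator).
    """
    count = 0
    with open(output_file, "w", encoding="utf-8") as f:
        for url in urls:
            f.write(f"{url}\n")
            count += 1
            if count % flush_every == 0:
                f.flush()  # push buffered lines out every batch
    return count
```

Because the iterator is consumed lazily, memory use stays flat regardless of how many websites the target has.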
def _write_default_urls(target_id: int, target_name: Optional[str], output_path: Path) -> int:
"""
懒加载模式:根据 Target 类型生成默认 URL 列表
Args:
target_id: 目标 ID
target_name: 目标名称
output_path: 输出文件路径
Returns:
int: 生成的 URL 数量
"""
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d 不存在,无法生成默认 URL", target_id)
return 0
target_name = target.name
target_type = target.type
logger.info("懒加载模式:Target 类型=%s, 名称=%s", target_type, target_name)
total_urls = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
if target_type == Target.TargetType.DOMAIN:
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ 域名默认 URL 已写入: http(s)://%s", target_name)
elif target_type == Target.TargetType.IP:
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ IP 默认 URL 已写入: http(s)://%s", target_name)
elif target_type == Target.TargetType.CIDR:
try:
network = ipaddress.ip_network(target_name, strict=False)
for ip in network.hosts():
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls += 2
if total_urls % 10000 == 0:
logger.info("已生成 %d 个 URL...", total_urls)
# /32 或 /128 特殊处理
if total_urls == 0:
ip = str(network.network_address)
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls = 2
logger.info("✓ CIDR 默认 URL 已写入: %d 个 URL (来自 %s)", total_urls, target_name)
except ValueError as e:
logger.error("CIDR 解析失败: %s - %s", target_name, e)
return 0
else:
logger.warning("不支持的 Target 类型: %s", target_type)
return 0
return total_urls
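The CIDR branch above expands every host into an http/https pair, with a fallback for networks that yield no hosts. A compact sketch of that expansion (hypothetical helper returning a list instead of writing a file, so the behaviour is easy to inspect):

```python
import ipaddress


def default_urls_for_cidr(cidr):
    """Expand a CIDR into http/https URLs, mirroring the lazy-load fallback.

    Note: modern ipaddress special-cases /31 and /32, so .hosts() already
    yields their addresses; the fallback below is defensive, as in the task.
    """
    network = ipaddress.ip_network(cidr, strict=False)
    urls = []
    for ip in network.hosts():
        urls.append(f"http://{ip}")
        urls.append(f"https://{ip}")
    if not urls:  # defensive: networks with no usable hosts
        ip = network.network_address
        urls = [f"http://{ip}", f"https://{ip}"]
    return urls
```

For example, `192.168.0.0/30` yields the two usable hosts `.1` and `.2`, each with both schemes.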

View File

@@ -1,164 +0,0 @@
"""导出 Endpoint URL 到文件的 Task
基于 EndpointService.iter_endpoint_urls_by_target 按目标流式导出端点 URL
用于漏洞扫描(如 Dalfox XSS)的输入文件生成。
默认值模式:
- 如果没有 Endpoint,根据 Target 类型生成默认 URL
- DOMAIN: http(s)://target_name
- IP: http(s)://ip
- CIDR: 展开为所有 IP 的 http(s)://ip
"""
import logging
import ipaddress
from pathlib import Path
from typing import Dict, Optional
from prefect import task
from apps.asset.services import EndpointService
from apps.targets.services import TargetService
from apps.targets.models import Target
logger = logging.getLogger(__name__)
@task(name="export_endpoints")
def export_endpoints_task(
target_id: int,
output_file: str,
batch_size: int = 1000,
target_name: Optional[str] = None,
) -> Dict[str, object]:
"""导出目标下的所有 Endpoint URL 到文本文件。
默认值模式:如果没有 Endpoint,根据 Target 类型生成默认 URL
Args:
target_id: 目标 ID
output_file: 输出文件路径(绝对路径)
batch_size: 每次从数据库迭代的批大小
target_name: 目标名称(用于默认值模式)
Returns:
dict: {
"success": bool,
"output_file": str,
"total_count": int,
}
"""
try:
logger.info("开始导出 Endpoint URL - Target ID: %d, 输出文件: %s", target_id, output_file)
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
service = EndpointService()
url_iterator = service.iter_endpoint_urls_by_target(target_id, chunk_size=batch_size)
total_count = 0
with open(output_path, "w", encoding="utf-8", buffering=8192) as f:
for url in url_iterator:
f.write(f"{url}\n")
total_count += 1
if total_count % 10000 == 0:
logger.info("已导出 %d 个 Endpoint URL...", total_count)
# ==================== 懒加载模式:根据 Target 类型生成默认 URL ====================
if total_count == 0:
total_count = _write_default_urls(target_id, target_name, output_path)
logger.info(
"✓ Endpoint URL 导出完成 - 总数: %d, 文件: %s (%.2f KB)",
total_count,
str(output_path),
output_path.stat().st_size / 1024,
)
return {
"success": True,
"output_file": str(output_path),
"total_count": total_count,
}
except FileNotFoundError as e:
logger.error("输出目录不存在: %s", e)
raise
except PermissionError as e:
logger.error("文件写入权限不足: %s", e)
raise
except Exception as e:
logger.exception("导出 Endpoint URL 失败: %s", e)
raise
def _write_default_urls(target_id: int, target_name: Optional[str], output_path: Path) -> int:
"""
懒加载模式:根据 Target 类型生成默认 URL
Args:
target_id: 目标 ID
target_name: 目标名称(可选,如果为空则从数据库查询)
output_path: 输出文件路径
Returns:
int: 生成的 URL 数量
"""
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d 不存在,无法生成默认 URL", target_id)
return 0
target_name = target.name
target_type = target.type
logger.info("懒加载模式:Target 类型=%s, 名称=%s", target_type, target_name)
total_urls = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
if target_type == Target.TargetType.DOMAIN:
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ 域名默认 URL 已写入: http(s)://%s", target_name)
elif target_type == Target.TargetType.IP:
f.write(f"http://{target_name}\n")
f.write(f"https://{target_name}\n")
total_urls = 2
logger.info("✓ IP 默认 URL 已写入: http(s)://%s", target_name)
elif target_type == Target.TargetType.CIDR:
try:
network = ipaddress.ip_network(target_name, strict=False)
for ip in network.hosts():
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls += 2
if total_urls % 10000 == 0:
logger.info("已生成 %d 个 URL...", total_urls)
# /32 或 /128 特殊处理
if total_urls == 0:
ip = str(network.network_address)
f.write(f"http://{ip}\n")
f.write(f"https://{ip}\n")
total_urls = 2
logger.info("✓ CIDR 默认 URL 已写入: %d 个 URL (来自 %s)", total_urls, target_name)
except ValueError as e:
logger.error("CIDR 解析失败: %s - %s", target_name, e)
return 0
else:
logger.warning("不支持的 Target 类型: %s", target_type)
return 0
return total_urls

View File

@@ -1,54 +0,0 @@
"""
工作空间相关的 Prefect Tasks
负责扫描工作空间的创建、验证和管理
"""
from pathlib import Path
from prefect import task
import logging
logger = logging.getLogger(__name__)
@task(
name="create_scan_workspace",
description="创建并验证 Scan 工作空间目录",
retries=2,
retry_delay_seconds=5
)
def create_scan_workspace_task(scan_workspace_dir: str) -> Path:
"""
创建并验证 Scan 工作空间目录
Args:
scan_workspace_dir: Scan 工作空间目录路径
Returns:
Path: 创建的 Scan 工作空间路径对象
Raises:
OSError: 目录创建失败或不可写
"""
scan_workspace_path = Path(scan_workspace_dir)
# 创建目录
try:
scan_workspace_path.mkdir(parents=True, exist_ok=True)
logger.info("✓ Scan 工作空间已创建: %s", scan_workspace_path)
except OSError as e:
logger.error("创建 Scan 工作空间失败: %s - %s", scan_workspace_dir, e)
raise
# 验证目录是否可写
test_file = scan_workspace_path / ".test_write"
try:
test_file.touch()
test_file.unlink()
logger.info("✓ Scan 工作空间验证通过(可写): %s", scan_workspace_path)
except OSError as e:
error_msg = f"Scan 工作空间不可写: {scan_workspace_path}"
logger.error(error_msg)
raise OSError(error_msg) from e
return scan_workspace_path
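The task above creates the directory, then proves it is writable with a touch-and-unlink probe rather than trusting permission bits. A minimal sketch of that create-then-verify pattern (names are illustrative, not the production API):

```python
from pathlib import Path


def ensure_writable_workspace(path_str):
    """Create a workspace directory and verify it is actually writable.

    Sketch of the pattern in create_scan_workspace_task: mkdir is
    idempotent, and a probe file catches read-only mounts that a plain
    mkdir would not reveal.
    """
    workspace = Path(path_str)
    workspace.mkdir(parents=True, exist_ok=True)  # idempotent creation
    probe = workspace / ".test_write"
    try:
        probe.touch()
        probe.unlink()  # clean up the probe file
    except OSError as e:
        raise OSError(f"workspace not writable: {workspace}") from e
    return workspace
```

The probe matters on containerized deployments, where a volume can exist and look correct yet be mounted read-only.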

View File

@@ -1,33 +0,0 @@
"""
扫描模块工具包
提供扫描相关的工具函数。
"""
from .directory_cleanup import remove_directory
from .command_builder import build_scan_command
from .command_executor import execute_and_wait, execute_stream
from .wordlist_helpers import ensure_wordlist_local
from .nuclei_helpers import ensure_nuclei_templates_local
from .performance import FlowPerformanceTracker, CommandPerformanceTracker
from . import config_parser
__all__ = [
# 目录清理
'remove_directory',
# 命令构建
'build_scan_command', # 扫描工具命令构建(基于 f-string)
# 命令执行
'execute_and_wait', # 等待式执行(文件输出)
'execute_stream', # 流式执行(实时处理)
# 字典文件
'ensure_wordlist_local', # 确保本地字典文件(含 hash 校验)
# Nuclei 模板
'ensure_nuclei_templates_local', # 确保本地模板(含 commit hash 校验)
# 性能监控
'FlowPerformanceTracker', # Flow 性能追踪器(含系统资源采样)
'CommandPerformanceTracker', # 命令性能追踪器
# 配置解析
'config_parser',
]

View File

@@ -1,417 +0,0 @@
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.exceptions import NotFound, APIException
from rest_framework.filters import SearchFilter
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.db.utils import DatabaseError, IntegrityError, OperationalError
import logging
logger = logging.getLogger(__name__)
from ..models import Scan, ScheduledScan
from ..serializers import (
ScanSerializer, ScanHistorySerializer, QuickScanSerializer,
ScheduledScanSerializer, CreateScheduledScanSerializer,
UpdateScheduledScanSerializer, ToggleScheduledScanSerializer
)
from ..services.scan_service import ScanService
from ..services.scheduled_scan_service import ScheduledScanService
from ..repositories import ScheduledScanDTO
from apps.targets.services.target_service import TargetService
from apps.targets.services.organization_service import OrganizationService
from apps.engine.services.engine_service import EngineService
from apps.common.definitions import ScanStatus
from apps.common.pagination import BasePagination
class ScanViewSet(viewsets.ModelViewSet):
"""扫描任务视图集"""
serializer_class = ScanSerializer
pagination_class = BasePagination
filter_backends = [SearchFilter]
search_fields = ['target__name'] # 按目标名称搜索
def get_queryset(self):
"""优化查询集,提升 API 性能
查询优化策略:
- select_related: 预加载 target 和 engine(一对一/多对一关系,使用 JOIN)
- 移除 prefetch_related: 避免加载大量资产数据到内存
- order_by: 按创建时间降序排列(最新创建的任务排在最前面)
性能优化原理:
- 列表页:使用缓存统计字段(cached_*_count),避免实时 COUNT 查询
- 序列化器:严格验证缓存字段,确保数据一致性
- 分页场景:每页只显示 10 条记录,查询高效
- 避免大数据加载:不再预加载所有关联的资产数据
"""
# 只保留必要的 select_related,移除所有 prefetch_related
scan_service = ScanService()
queryset = scan_service.get_all_scans(prefetch_relations=True)
return queryset
def get_serializer_class(self):
"""根据不同的 action 返回不同的序列化器
- list action: 使用 ScanHistorySerializer(包含 summary 和 progress)
- retrieve action: 使用 ScanHistorySerializer(包含 summary 和 progress)
- 其他 action: 使用标准的 ScanSerializer
"""
if self.action in ['list', 'retrieve']:
return ScanHistorySerializer
return ScanSerializer
def destroy(self, request, *args, **kwargs):
"""
删除单个扫描任务(两阶段删除)
1. 软删除:立即对用户不可见
2. 硬删除:后台异步执行
"""
try:
scan = self.get_object()
scan_service = ScanService()
result = scan_service.delete_scans_two_phase([scan.id])
return Response({
'message': f'已删除扫描任务: Scan #{scan.id}',
'scanId': scan.id,
'deletedCount': result['soft_deleted_count'],
'deletedScans': result['scan_names']
}, status=status.HTTP_200_OK)
except Scan.DoesNotExist:
raise NotFound('扫描任务不存在')
except ValueError as e:
raise NotFound(str(e))
except Exception as e:
logger.exception("删除扫描任务时发生错误")
raise APIException('服务器错误,请稍后重试')
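The docstring above describes a two-phase delete: soft-delete immediately so the scan disappears from the UI, then purge asynchronously. A minimal sketch of that pattern, with `scans` as a hypothetical id-to-record dict and `schedule_hard_delete` as any async scheduler (both stand-ins for the service/ORM layer):

```python
def delete_scans_two_phase(scan_ids, scans, schedule_hard_delete):
    """Soft-delete matching scans now, queue a background hard delete.

    Returns the same shape as the service result used by the view above
    (soft_deleted_count / scan_names); missing IDs are silently skipped.
    """
    found = [scans[i] for i in scan_ids if i in scans]
    if not found:
        raise ValueError("no matching scans")
    for scan in found:
        scan["deleted"] = True  # phase 1: hide from users immediately
    schedule_hard_delete([s["id"] for s in found])  # phase 2: async purge
    return {
        "soft_deleted_count": len(found),
        "scan_names": [s["name"] for s in found],
    }
```

Splitting the phases keeps the HTTP request fast while cascading deletion of subdomains, endpoints, and snapshots happens off the request path.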
@action(detail=False, methods=['post'])
def quick(self, request):
"""
快速扫描接口
功能:
1. 接收目标列表和引擎配置
2. 自动解析输入(支持 URL、域名、IP、CIDR)
3. 批量创建 Target、Website、Endpoint 资产
4. 立即发起批量扫描
请求参数:
{
"targets": [{"name": "example.com"}, {"name": "https://example.com/api"}],
"engine_id": 1
}
支持的输入格式:
- 域名: example.com
- IP: 192.168.1.1
- CIDR: 10.0.0.0/8
- URL: https://example.com/api/v1
"""
from ..services.quick_scan_service import QuickScanService
serializer = QuickScanSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
targets_data = serializer.validated_data['targets']
engine_id = serializer.validated_data.get('engine_id')
try:
# 提取输入字符串列表
inputs = [t['name'] for t in targets_data]
# 1. 使用 QuickScanService 解析输入并创建资产
quick_scan_service = QuickScanService()
result = quick_scan_service.process_quick_scan(inputs, engine_id)
targets = result['targets']
if not targets:
return Response({
'error': '没有有效的目标可供扫描',
'errors': result.get('errors', [])
}, status=status.HTTP_400_BAD_REQUEST)
# 2. 获取扫描引擎
engine_service = EngineService()
engine = engine_service.get_engine(engine_id)
if not engine:
raise ValidationError(f'扫描引擎 ID {engine_id} 不存在')
# 3. 批量发起扫描
scan_service = ScanService()
created_scans = scan_service.create_scans(
targets=targets,
engine=engine
)
# 序列化返回结果
scan_serializer = ScanSerializer(created_scans, many=True)
return Response({
'message': f'快速扫描已启动:{len(created_scans)} 个任务',
'target_stats': result['target_stats'],
'asset_stats': result['asset_stats'],
'errors': result.get('errors', []),
'scans': scan_serializer.data
}, status=status.HTTP_201_CREATED)
except ValidationError as e:
return Response({'error': str(e)}, status=status.HTTP_400_BAD_REQUEST)
except Exception as e:
logger.exception("快速扫描启动失败")
return Response(
{'error': '服务器内部错误,请稍后重试'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@action(detail=False, methods=['post'])
def initiate(self, request):
"""
发起扫描任务
请求参数:
- organization_id: 组织ID (int, 可选)
- target_id: 目标ID (int, 可选)
- engine_id: 扫描引擎ID (int, 必填)
注意: organization_id 和 target_id 二选一
返回:
- 扫描任务详情(单个或多个)
"""
# 获取请求数据
organization_id = request.data.get('organization_id')
target_id = request.data.get('target_id')
engine_id = request.data.get('engine_id')
try:
# 步骤1准备扫描所需的数据验证参数、查询资源、返回目标列表和引擎
scan_service = ScanService()
targets, engine = scan_service.prepare_initiate_scan(
organization_id=organization_id,
target_id=target_id,
engine_id=engine_id
)
# 步骤2批量创建扫描记录并分发扫描任务
created_scans = scan_service.create_scans(
targets=targets,
engine=engine
)
# 序列化返回结果
scan_serializer = ScanSerializer(created_scans, many=True)
return Response(
{
'message': f'已成功发起 {len(created_scans)} 个扫描任务',
'count': len(created_scans),
'scans': scan_serializer.data
},
status=status.HTTP_201_CREATED
)
except ObjectDoesNotExist as e:
# 资源不存在错误(由 service 层抛出)
error_msg = str(e)
return Response(
{'error': error_msg},
status=status.HTTP_404_NOT_FOUND
)
except ValidationError as e:
# 参数验证错误(由 service 层抛出)
return Response(
{'error': str(e)},
status=status.HTTP_400_BAD_REQUEST
)
except (DatabaseError, IntegrityError, OperationalError):
# 数据库错误
return Response(
{'error': '数据库错误,请稍后重试'},
status=status.HTTP_503_SERVICE_UNAVAILABLE
)
# 所有快照相关的 action 和 export 已迁移到 asset/views.py 中的快照 ViewSet
# GET /api/scans/{id}/subdomains/ -> SubdomainSnapshotViewSet
# GET /api/scans/{id}/subdomains/export/ -> SubdomainSnapshotViewSet.export
# GET /api/scans/{id}/websites/ -> WebsiteSnapshotViewSet
# GET /api/scans/{id}/websites/export/ -> WebsiteSnapshotViewSet.export
# GET /api/scans/{id}/directories/ -> DirectorySnapshotViewSet
# GET /api/scans/{id}/directories/export/ -> DirectorySnapshotViewSet.export
# GET /api/scans/{id}/endpoints/ -> EndpointSnapshotViewSet
# GET /api/scans/{id}/endpoints/export/ -> EndpointSnapshotViewSet.export
# GET /api/scans/{id}/ip-addresses/ -> HostPortMappingSnapshotViewSet
# GET /api/scans/{id}/ip-addresses/export/ -> HostPortMappingSnapshotViewSet.export
# GET /api/scans/{id}/vulnerabilities/ -> VulnerabilitySnapshotViewSet
@action(detail=False, methods=['post', 'delete'], url_path='bulk-delete')
def bulk_delete(self, request):
"""
批量删除扫描记录
请求参数:
- ids: 扫描ID列表 (list[int], 必填)
示例请求:
POST /api/scans/bulk-delete/
{
"ids": [1, 2, 3]
}
返回:
- message: 成功消息
- deletedCount: 实际删除的记录数
注意:
- 使用级联删除,会同时删除关联的子域名、端点等数据
- 只删除存在的记录,不存在的 ID 会被忽略
"""
ids = request.data.get('ids', [])
# 参数验证
if not ids:
return Response(
{'error': '缺少必填参数: ids'},
status=status.HTTP_400_BAD_REQUEST
)
if not isinstance(ids, list):
return Response(
{'error': 'ids 必须是数组'},
status=status.HTTP_400_BAD_REQUEST
)
if not all(isinstance(i, int) for i in ids):
return Response(
{'error': 'ids 数组中的所有元素必须是整数'},
status=status.HTTP_400_BAD_REQUEST
)
try:
# 使用 Service 层批量删除(两阶段删除)
scan_service = ScanService()
result = scan_service.delete_scans_two_phase(ids)
return Response({
'message': f"已删除 {result['soft_deleted_count']} 个扫描任务",
'deletedCount': result['soft_deleted_count'],
'deletedScans': result['scan_names']
}, status=status.HTTP_200_OK)
except ValueError as e:
# 未找到记录
raise NotFound(str(e))
except Exception as e:
logger.exception("批量删除扫描任务时发生错误")
raise APIException('服务器错误,请稍后重试')
@action(detail=False, methods=['get'])
def statistics(self, request):
"""
获取扫描统计数据
返回扫描任务的汇总统计信息,用于仪表板和扫描历史页面。
使用缓存字段聚合查询,性能优异。
返回:
- total: 总扫描次数
- running: 运行中的扫描数量
- completed: 已完成的扫描数量
- failed: 失败的扫描数量
- totalVulns: 总共发现的漏洞数量
- totalSubdomains: 总共发现的子域名数量
- totalEndpoints: 总共发现的端点数量
- totalAssets: 总资产数
"""
try:
# 使用 Service 层获取统计数据
scan_service = ScanService()
stats = scan_service.get_statistics()
return Response({
'total': stats['total'],
'running': stats['running'],
'completed': stats['completed'],
'failed': stats['failed'],
'totalVulns': stats['total_vulns'],
'totalSubdomains': stats['total_subdomains'],
'totalEndpoints': stats['total_endpoints'],
'totalWebsites': stats['total_websites'],
'totalAssets': stats['total_assets'],
})
except (DatabaseError, OperationalError):
return Response(
{'error': '数据库错误,请稍后重试'},
status=status.HTTP_503_SERVICE_UNAVAILABLE
)
@action(detail=True, methods=['post'])
def stop(self, request, pk=None): # pylint: disable=unused-argument
"""
停止扫描任务
URL: POST /api/scans/{id}/stop/
功能:
- 终止正在运行或初始化的扫描任务
- 更新扫描状态为 CANCELLED
状态限制:
- 只能停止 RUNNING 或 INITIATED 状态的扫描
- 已完成、失败或取消的扫描无法停止
返回:
- message: 成功消息
- revokedTaskCount: 取消的 Flow Run 数量
"""
try:
# 使用 Service 层处理停止逻辑
scan_service = ScanService()
success, revoked_count = scan_service.stop_scan(scan_id=pk)
if not success:
# 检查是否是状态不允许的问题
scan = scan_service.get_scan(scan_id=pk, prefetch_relations=False)
if scan and scan.status not in [ScanStatus.RUNNING, ScanStatus.INITIATED]:
return Response(
{
'error': f'无法停止扫描:当前状态为 {ScanStatus(scan.status).label}',
'detail': '只能停止运行中或初始化状态的扫描'
},
status=status.HTTP_400_BAD_REQUEST
)
# 其他失败原因
return Response(
{'error': '停止扫描失败'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
return Response(
{
'message': f'扫描已停止,已撤销 {revoked_count} 个任务',
'revokedTaskCount': revoked_count
},
status=status.HTTP_200_OK
)
except ObjectDoesNotExist:
return Response(
{'error': f'扫描 ID {pk} 不存在'},
status=status.HTTP_404_NOT_FOUND
)
except (DatabaseError, IntegrityError, OperationalError):
return Response(
{'error': '数据库错误,请稍后重试'},
status=status.HTTP_503_SERVICE_UNAVAILABLE
)

View File

@@ -1,27 +0,0 @@
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "config.settings"
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
testpaths = ["apps"]
addopts = "-v --reuse-db"
[tool.pylint]
django-settings-module = "config.settings"
load-plugins = "pylint_django"
[tool.pylint.messages_control]
disable = [
"missing-docstring",
"invalid-name",
"too-few-public-methods",
"no-member",
"import-error",
"no-name-in-module",
]
[tool.pylint.format]
max-line-length = 120
[tool.pylint.basic]
good-names = ["i", "j", "k", "ex", "Run", "_", "id", "pk", "ip", "url", "db", "qs"]

File diff suppressed because it is too large.

Binary file not shown.


View File

@@ -4,27 +4,27 @@ import { VulnSeverityChart } from "@/components/dashboard/vuln-severity-chart"
import { DashboardDataTable } from "@/components/dashboard/dashboard-data-table"
/**
* Dashboard page component
* This is the main dashboard page of the application, containing cards, charts and data tables
* Layout structure has been moved to the root layout component
*/
export default function Page() {
return (
// 内容区域,包含卡片、图表和数据表格
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 顶部统计卡片 */}
// Content area containing cards, charts and data tables
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6 animate-dashboard-fade-in">
{/* Top statistics cards */}
<DashboardStatCards />
{/* 图表区域 - 趋势图 + 漏洞分布 */}
{/* Chart area - Trend chart + Vulnerability distribution */}
<div className="grid gap-4 px-4 lg:px-6 @xl/main:grid-cols-2">
{/* 资产趋势折线图 */}
{/* Asset trend line chart */}
<AssetTrendChart />
{/* 漏洞严重程度分布 */}
{/* Vulnerability severity distribution */}
<VulnSeverityChart />
</div>
{/* 漏洞 / 扫描历史 Tab */}
{/* Vulnerabilities / Scan history tab */}
<div className="px-4 lg:px-6">
<DashboardDataTable />
</div>

View File

@@ -0,0 +1,139 @@
import type React from "react"
import type { Metadata } from "next"
import { NextIntlClientProvider } from 'next-intl'
import { getMessages, setRequestLocale, getTranslations } from 'next-intl/server'
import { notFound } from 'next/navigation'
import { locales, localeHtmlLang, type Locale } from '@/i18n/config'
// Import global style files
import "../globals.css"
// Import Noto Sans SC local font
import "@fontsource/noto-sans-sc/400.css"
import "@fontsource/noto-sans-sc/500.css"
import "@fontsource/noto-sans-sc/700.css"
// Import color themes
import "@/styles/themes/bubblegum.css"
import "@/styles/themes/quantum-rose.css"
import "@/styles/themes/clean-slate.css"
import "@/styles/themes/cosmic-night.css"
import "@/styles/themes/vercel.css"
import "@/styles/themes/vercel-dark.css"
import "@/styles/themes/violet-bloom.css"
import "@/styles/themes/cyberpunk-1.css"
import { Suspense } from "react"
import Script from "next/script"
import { QueryProvider } from "@/components/providers/query-provider"
import { ThemeProvider } from "@/components/providers/theme-provider"
import { UiI18nProvider } from "@/components/providers/ui-i18n-provider"
// Import common layout components
import { RoutePrefetch } from "@/components/route-prefetch"
import { RouteProgress } from "@/components/route-progress"
import { AuthLayout } from "@/components/auth/auth-layout"
// Dynamically generate metadata
export async function generateMetadata({ params }: { params: Promise<{ locale: string }> }): Promise<Metadata> {
const { locale } = await params
const t = await getTranslations({ locale, namespace: 'metadata' })
return {
title: t('title'),
description: t('description'),
keywords: t('keywords').split(',').map(k => k.trim()),
generator: "Orbit ASM Platform",
authors: [{ name: "yyhuni" }],
icons: {
icon: [{ url: "/icon.svg", type: "image/svg+xml" }],
},
openGraph: {
title: t('ogTitle'),
description: t('ogDescription'),
type: "website",
locale: locale === 'zh' ? 'zh_CN' : 'en_US',
},
robots: {
index: true,
follow: true,
},
}
}
// Use Noto Sans SC + system font fallback, fully loaded locally
const fontConfig = {
className: "font-sans",
style: {
fontFamily: "'Noto Sans SC', system-ui, -apple-system, PingFang SC, Hiragino Sans GB, Microsoft YaHei, sans-serif"
}
}
// Generate static parameters, support all languages
export function generateStaticParams() {
return locales.map((locale) => ({ locale }))
}
interface Props {
children: React.ReactNode
params: Promise<{ locale: string }>
}
/**
* Language layout component
* Wraps all pages, provides internationalization context
*/
export default async function LocaleLayout({
children,
params,
}: Props) {
const { locale } = await params
// Validate locale validity
if (!locales.includes(locale as Locale)) {
notFound()
}
// Enable static rendering
setRequestLocale(locale)
// Load translation messages
const messages = await getMessages()
return (
<html lang={localeHtmlLang[locale as Locale]} suppressHydrationWarning>
<body className={fontConfig.className} style={fontConfig.style}>
{/* Load external scripts */}
<Script
src="https://tweakcn.com/live-preview.min.js"
strategy="beforeInteractive"
crossOrigin="anonymous"
/>
{/* Route loading progress bar */}
<Suspense fallback={null}>
<RouteProgress />
</Suspense>
{/* ThemeProvider provides theme switching functionality */}
<ThemeProvider
attribute="class"
defaultTheme="dark"
enableSystem
disableTransitionOnChange
>
{/* NextIntlClientProvider provides internationalization context */}
<NextIntlClientProvider messages={messages}>
{/* QueryProvider provides React Query functionality */}
<QueryProvider>
{/* UiI18nProvider provides UI component translations */}
<UiI18nProvider>
{/* Route prefetch */}
<RoutePrefetch />
{/* AuthLayout handles authentication and sidebar display */}
<AuthLayout>
{children}
</AuthLayout>
</UiI18nProvider>
</QueryProvider>
</NextIntlClientProvider>
</ThemeProvider>
</body>
</html>
)
}

View File

@@ -0,0 +1,28 @@
import type { Metadata } from "next"
import { getTranslations } from "next-intl/server"
type Props = {
params: Promise<{ locale: string }>
}
export async function generateMetadata({ params }: Props): Promise<Metadata> {
const { locale } = await params
const t = await getTranslations({ locale, namespace: "auth" })
return {
title: t("pageTitle"),
description: t("pageDescription"),
}
}
/**
* Login page layout
* Does not include sidebar and header
*/
export default function LoginLayout({
children,
}: {
children: React.ReactNode
}) {
return children
}

View File

@@ -0,0 +1,228 @@
"use client"
import React from "react"
import { useRouter } from "next/navigation"
import { useTranslations } from "next-intl"
import { useQueryClient } from "@tanstack/react-query"
import dynamic from "next/dynamic"
import { LoginBootScreen } from "@/components/auth/login-boot-screen"
import { TerminalLogin } from "@/components/ui/terminal-login"
import { useLogin, useAuth } from "@/hooks/use-auth"
import { vulnerabilityKeys } from "@/hooks/use-vulnerabilities"
import { useRoutePrefetch } from "@/hooks/use-route-prefetch"
import { getAssetStatistics, getStatisticsHistory } from "@/services/dashboard.service"
import { getScans } from "@/services/scan.service"
import { VulnerabilityService } from "@/services/vulnerability.service"
// Dynamic import to avoid SSR issues with WebGL
const PixelBlast = dynamic(() => import("@/components/PixelBlast"), { ssr: false })
const BOOT_SPLASH_MS = 600
const BOOT_FADE_MS = 200
type BootOverlayPhase = "entering" | "visible" | "leaving" | "hidden"
export default function LoginPage() {
// Preload all page components on login page
useRoutePrefetch()
const router = useRouter()
const queryClient = useQueryClient()
const { data: auth, isLoading: authLoading } = useAuth()
const { mutateAsync: login, isPending } = useLogin()
const t = useTranslations("auth.terminal")
const loginStartedRef = React.useRef(false)
const [loginReady, setLoginReady] = React.useState(false)
const [pixelFirstFrame, setPixelFirstFrame] = React.useState(false)
const handlePixelFirstFrame = React.useCallback(() => {
setPixelFirstFrame(true)
}, [])
// Extract the prefetch logic into a reusable function
const prefetchDashboardData = React.useCallback(async () => {
const scansParams = { page: 1, pageSize: 10 }
const vulnsParams = { page: 1, pageSize: 10 }
return Promise.allSettled([
queryClient.prefetchQuery({
queryKey: ["asset", "statistics"],
queryFn: getAssetStatistics,
}),
queryClient.prefetchQuery({
queryKey: ["asset", "statistics", "history", 7],
queryFn: () => getStatisticsHistory(7),
}),
queryClient.prefetchQuery({
queryKey: ["scans", scansParams],
queryFn: () => getScans(scansParams),
}),
queryClient.prefetchQuery({
queryKey: vulnerabilityKeys.list(vulnsParams),
queryFn: () => VulnerabilityService.getAllVulnerabilities(vulnsParams),
}),
])
}, [queryClient])
// Always show a short splash on entering the login page.
const [bootMinDone, setBootMinDone] = React.useState(false)
const [bootPhase, setBootPhase] = React.useState<BootOverlayPhase>("entering")
React.useEffect(() => {
setBootMinDone(false)
setBootPhase("entering")
const bootTimer = setTimeout(() => setBootMinDone(true), BOOT_SPLASH_MS)
const raf = requestAnimationFrame(() => setBootPhase("visible"))
return () => {
clearTimeout(bootTimer)
cancelAnimationFrame(raf)
}
}, [])
// Start hiding the splash after the minimum time AND auth check completes.
// Note: don't schedule the fade-out timer in the same effect where we set `bootPhase`,
// otherwise the effect cleanup will cancel the timer when `bootPhase` changes.
React.useEffect(() => {
if (bootPhase !== "visible") return
if (!bootMinDone) return
if (authLoading) return
if (!pixelFirstFrame) return
setBootPhase("leaving")
}, [authLoading, bootMinDone, bootPhase, pixelFirstFrame])
React.useEffect(() => {
if (bootPhase !== "leaving") return
const timer = setTimeout(() => setBootPhase("hidden"), BOOT_FADE_MS)
return () => clearTimeout(timer)
}, [bootPhase])
// Memoize translations object to avoid recreating on every render
const translations = React.useMemo(() => ({
title: t("title"),
subtitle: t("subtitle"),
usernamePrompt: t("usernamePrompt"),
passwordPrompt: t("passwordPrompt"),
authenticating: t("authenticating"),
processing: t("processing"),
accessGranted: t("accessGranted"),
welcomeMessage: t("welcomeMessage"),
authFailed: t("authFailed"),
invalidCredentials: t("invalidCredentials"),
shortcuts: t("shortcuts"),
submit: t("submit"),
cancel: t("cancel"),
clear: t("clear"),
startEnd: t("startEnd"),
}), [t])
// If already logged in, warm up the dashboard, then redirect.
React.useEffect(() => {
if (authLoading) return
if (!auth?.authenticated) return
if (loginStartedRef.current) return
let cancelled = false
void (async () => {
await prefetchDashboardData()
if (cancelled) return
router.replace("/dashboard/")
})()
return () => {
cancelled = true
}
}, [auth?.authenticated, authLoading, prefetchDashboardData, router])
React.useEffect(() => {
if (!loginReady) return
router.replace("/dashboard/")
}, [loginReady, router])
const handleLogin = async (username: string, password: string) => {
loginStartedRef.current = true
setLoginReady(false)
// Run independent steps in parallel: login verification + prefetching the dashboard bundle
const [loginRes] = await Promise.all([
login({ username, password }),
router.prefetch("/dashboard/"),
])
// Prefetch dashboard data
await prefetchDashboardData()
// Prime auth cache so AuthLayout doesn't flash a full-screen loading state.
queryClient.setQueryData(["auth", "me"], {
authenticated: true,
user: loginRes.user,
})
setLoginReady(true)
}
const loginVisible = bootPhase === "leaving" || bootPhase === "hidden"
return (
<div className="relative flex min-h-svh flex-col bg-black">
<div className={`fixed inset-0 z-0 transition-opacity duration-300 ${loginVisible ? "opacity-100" : "opacity-0"}`}>
<PixelBlast
onFirstFrame={handlePixelFirstFrame}
className=""
style={{}}
pixelSize={6.5}
patternScale={4.5}
color="#FF10F0"
speed={0.35}
enableRipples={false}
/>
</div>
{/* Fingerprint identifier - for FOFA/Shodan and other search engines to identify */}
<meta name="generator" content="Orbit ASM Platform" />
{/* Main content area */}
<div
className={`relative z-10 flex-1 flex items-center justify-center p-6 transition-[opacity,transform] duration-300 ${
loginVisible ? "opacity-100 translate-y-0" : "opacity-0 translate-y-2"
}`}
>
<TerminalLogin
onLogin={handleLogin}
authDone={loginReady}
isPending={isPending}
translations={translations}
/>
</div>
{/* Version number - fixed at the bottom of the page */}
<div
className={`relative z-10 flex-shrink-0 text-center py-4 transition-opacity duration-300 ${
loginVisible ? "opacity-100" : "opacity-0"
}`}
>
<p className="text-xs text-muted-foreground">
{process.env.NEXT_PUBLIC_VERSION || "dev"}
</p>
</div>
{/* Full-page splash overlay */}
{bootPhase !== "hidden" && (
<div
className={`fixed inset-0 z-50 transition-opacity ease-out ${
bootPhase === "visible" ? "opacity-100" : "opacity-0 pointer-events-none"
}`}
style={{ transitionDuration: `${BOOT_FADE_MS}ms` }}
>
<LoginBootScreen />
</div>
)}
</div>
)
}

View File

@@ -4,8 +4,8 @@ import React from "react"
import { OrganizationDetailView } from "@/components/organization/organization-detail-view"
/**
* Organization detail page
* Displays organization statistics and asset list
*/
export default function OrganizationDetailPage({
params,

View File

@@ -1,30 +1,35 @@
// 导入组织管理组件
"use client"
// Import organization management component
import { OrganizationList } from "@/components/organization/organization-list"
// 导入图标
// Import icons
import { Building2 } from "lucide-react"
import { useTranslations } from "next-intl"
/**
* Organization management page
* Sub-page under asset management that displays organization list and related operations
*/
export default function OrganizationPage() {
const t = useTranslations("pages.organization")
return (
// 内容区域,包含组织管理功能
// Content area containing organization management features
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 页面头部 */}
{/* Page header */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div>
<h2 className="text-2xl font-bold tracking-tight flex items-center gap-2">
<Building2 />
{t("title")}
</h2>
<p className="text-muted-foreground">
{t("description")}
</p>
</div>
</div>
{/* 组织列表组件 */}
{/* Organization list component */}
<div className="px-4 lg:px-6">
<OrganizationList />
</div>


@@ -0,0 +1,7 @@
import { redirect } from 'next/navigation';
import { defaultLocale } from '@/i18n/config';
export default function Home() {
// Redirect directly to dashboard page (with language prefix)
redirect(`/${defaultLocale}/dashboard/`);
}


@@ -4,6 +4,7 @@ import React, { useState, useMemo } from "react"
import { Settings, Search, Pencil, Trash2, Check, X, Plus } from "lucide-react"
import * as yaml from "js-yaml"
import Editor from "@monaco-editor/react"
import { useTranslations } from "next-intl"
import { useColorTheme } from "@/hooks/use-color-theme"
import { Button } from "@/components/ui/button"
import { Input } from "@/components/ui/input"
@@ -26,25 +27,29 @@ import { cn } from "@/lib/utils"
import type { ScanEngine } from "@/types/engine.types"
import { MasterDetailSkeleton } from "@/components/ui/master-detail-skeleton"
/** 功能配置项定义 - 与 YAML 配置结构对应 */
/** Feature configuration item definition - corresponds to YAML configuration structure */
const FEATURE_LIST = [
{ key: "subdomain_discovery", label: "子域名发现" },
{ key: "port_scan", label: "端口扫描" },
{ key: "site_scan", label: "站点扫描" },
{ key: "directory_scan", label: "目录扫描" },
{ key: "url_fetch", label: "URL 抓取" },
{ key: "vuln_scan", label: "漏洞扫描" },
{ key: "subdomain_discovery" },
{ key: "port_scan" },
{ key: "site_scan" },
{ key: "fingerprint_detect" },
{ key: "directory_scan" },
{ key: "screenshot" },
{ key: "url_fetch" },
{ key: "vuln_scan" },
] as const
type FeatureKey = typeof FEATURE_LIST[number]["key"]
/** 解析引擎配置获取启用的功能 */
/** Parse engine configuration to get enabled features */
function parseEngineFeatures(engine: ScanEngine): Record<FeatureKey, boolean> {
const defaultFeatures: Record<FeatureKey, boolean> = {
subdomain_discovery: false,
port_scan: false,
site_scan: false,
fingerprint_detect: false,
directory_scan: false,
screenshot: false,
url_fetch: false,
vuln_scan: false,
}
@@ -59,7 +64,9 @@ function parseEngineFeatures(engine: ScanEngine): Record<FeatureKey, boolean> {
subdomain_discovery: !!config.subdomain_discovery,
port_scan: !!config.port_scan,
site_scan: !!config.site_scan,
fingerprint_detect: !!config.fingerprint_detect,
directory_scan: !!config.directory_scan,
screenshot: !!config.screenshot,
url_fetch: !!config.url_fetch,
vuln_scan: !!config.vuln_scan,
}
@@ -68,14 +75,14 @@ function parseEngineFeatures(engine: ScanEngine): Record<FeatureKey, boolean> {
}
}
/** 计算启用的功能数量 */
/** Calculate the number of enabled features */
function countEnabledFeatures(engine: ScanEngine) {
const features = parseEngineFeatures(engine)
return Object.values(features).filter(Boolean).length
}
/**
*
* Scan engine page
*/
export default function ScanEnginePage() {
const [selectedId, setSelectedId] = useState<number | null>(null)
@@ -87,6 +94,12 @@ export default function ScanEnginePage() {
const [engineToDelete, setEngineToDelete] = useState<ScanEngine | null>(null)
const { currentTheme } = useColorTheme()
// Internationalization
const tCommon = useTranslations("common")
const tConfirm = useTranslations("common.confirm")
const tNav = useTranslations("navigation")
const tEngine = useTranslations("scan.engine")
// API Hooks
const { data: engines = [], isLoading } = useEngines()
@@ -94,20 +107,20 @@ export default function ScanEnginePage() {
const updateEngineMutation = useUpdateEngine()
const deleteEngineMutation = useDeleteEngine()
// 过滤引擎列表
// Filter engine list
const filteredEngines = useMemo(() => {
if (!searchQuery.trim()) return engines
const query = searchQuery.toLowerCase()
return engines.filter((e) => e.name.toLowerCase().includes(query))
}, [engines, searchQuery])
// 选中的引擎
// Selected engine
const selectedEngine = useMemo(() => {
if (!selectedId) return null
return engines.find((e) => e.id === selectedId) || null
}, [selectedId, engines])
// 选中引擎的功能状态
// Selected engine's feature status
const selectedFeatures = useMemo(() => {
if (!selectedEngine) return null
return parseEngineFeatures(selectedEngine)
@@ -150,21 +163,21 @@ export default function ScanEnginePage() {
})
}
// 加载状态
// Loading state
if (isLoading) {
return <MasterDetailSkeleton title="扫描引擎" listItemCount={4} />
return <MasterDetailSkeleton title={tNav("scanEngine")} listItemCount={4} />
}
return (
<div className="flex flex-col h-full">
{/* 顶部:标题 + 搜索 + 新建按钮 */}
{/* Top: Title + Search + Create button */}
<div className="flex items-center justify-between gap-4 px-4 py-4 lg:px-6">
<h1 className="text-2xl font-bold shrink-0"></h1>
<h1 className="text-2xl font-bold shrink-0">{tNav("scanEngine")}</h1>
<div className="flex items-center gap-2 flex-1 max-w-md">
<div className="relative flex-1">
<Search className="absolute left-2.5 top-1/2 h-4 w-4 -translate-y-1/2 text-muted-foreground" />
<Input
placeholder="搜索引擎..."
placeholder={tEngine("searchPlaceholder")}
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="pl-8"
@@ -173,27 +186,27 @@ export default function ScanEnginePage() {
</div>
<Button onClick={() => setIsCreateDialogOpen(true)}>
<Plus className="h-4 w-4 mr-1" />
{tEngine("createEngine")}
</Button>
</div>
<Separator />
{/* 主体:左侧列表 + 右侧详情 */}
{/* Main: Left list + Right details */}
<div className="flex flex-1 min-h-0">
{/* 左侧:引擎列表 */}
{/* Left: Engine list */}
<div className="w-72 lg:w-80 border-r flex flex-col">
<div className="px-4 py-3 border-b">
<h2 className="text-sm font-medium text-muted-foreground">
({filteredEngines.length})
{tEngine("engineList")} ({filteredEngines.length})
</h2>
</div>
<ScrollArea className="flex-1">
{isLoading ? (
<div className="p-4 text-sm text-muted-foreground">...</div>
<div className="p-4 text-sm text-muted-foreground">{tCommon("loading")}</div>
) : filteredEngines.length === 0 ? (
<div className="p-4 text-sm text-muted-foreground">
{searchQuery ? "未找到匹配的引擎" : "暂无引擎,请先新建"}
{searchQuery ? tEngine("noMatchingEngine") : tEngine("noEngines")}
</div>
) : (
<div className="p-2">
@@ -212,7 +225,7 @@ export default function ScanEnginePage() {
{engine.name}
</div>
<div className="text-xs text-muted-foreground mt-0.5">
{countEnabledFeatures(engine)}
{tEngine("featuresEnabled", { count: countEnabledFeatures(engine) })}
</div>
</button>
))}
@@ -221,11 +234,11 @@ export default function ScanEnginePage() {
</ScrollArea>
</div>
{/* 右侧:引擎详情 */}
{/* Right: Engine details */}
<div className="flex-1 flex flex-col min-w-0">
{selectedEngine && selectedFeatures ? (
<>
{/* 详情头部 */}
{/* Details header */}
<div className="px-6 py-4 border-b">
<div className="flex items-start gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-primary/10 shrink-0">
@@ -236,20 +249,20 @@ export default function ScanEnginePage() {
{selectedEngine.name}
</h2>
<p className="text-sm text-muted-foreground mt-0.5">
{new Date(selectedEngine.updatedAt).toLocaleString("zh-CN")}
{tEngine("updatedAt")} {new Date(selectedEngine.updatedAt).toLocaleString()}
</p>
</div>
<Badge variant="outline">
{countEnabledFeatures(selectedEngine)}
{tEngine("featuresCount", { count: countEnabledFeatures(selectedEngine) })}
</Badge>
</div>
</div>
{/* 详情内容 */}
{/* Details content */}
<div className="flex-1 flex flex-col min-h-0 p-6 gap-6">
{/* 功能状态 */}
{/* Feature status */}
<div className="shrink-0">
<h3 className="text-sm font-medium mb-3"></h3>
<h3 className="text-sm font-medium mb-3">{tEngine("enabledFeatures")}</h3>
<div className="rounded-lg border">
<div className="grid grid-cols-3 gap-px bg-muted">
{FEATURE_LIST.map((feature) => {
@@ -267,7 +280,7 @@ export default function ScanEnginePage() {
) : (
<X className="h-4 w-4 text-muted-foreground/50 shrink-0" />
)}
<span className="text-sm truncate">{feature.label}</span>
<span className="text-sm truncate">{tEngine(`features.${feature.key}`)}</span>
</div>
)
})}
@@ -275,10 +288,10 @@ export default function ScanEnginePage() {
</div>
</div>
{/* 配置预览 */}
{/* Configuration preview */}
{selectedEngine.configuration && (
<div className="flex-1 flex flex-col min-h-0">
<h3 className="text-sm font-medium mb-3 shrink-0"></h3>
<h3 className="text-sm font-medium mb-3 shrink-0">{tEngine("configPreview")}</h3>
<div className="flex-1 rounded-lg border overflow-hidden min-h-0">
<Editor
height="100%"
@@ -302,7 +315,7 @@ export default function ScanEnginePage() {
)}
</div>
{/* 操作按钮 */}
{/* Action buttons */}
<div className="px-6 py-4 border-t flex items-center gap-2">
<Button
variant="outline"
@@ -310,7 +323,7 @@ export default function ScanEnginePage() {
onClick={() => handleEdit(selectedEngine)}
>
<Pencil className="h-4 w-4 mr-1.5" />
{tEngine("editConfig")}
</Button>
<div className="flex-1" />
<Button
@@ -321,23 +334,23 @@ export default function ScanEnginePage() {
disabled={deleteEngineMutation.isPending}
>
<Trash2 className="h-4 w-4 mr-1.5" />
{tCommon("actions.delete")}
</Button>
</div>
</>
) : (
// 未选中状态
// Unselected state
<div className="flex-1 flex items-center justify-center">
<div className="text-center text-muted-foreground">
<Settings className="h-12 w-12 mx-auto mb-3 opacity-50" />
<p className="text-sm"></p>
<p className="text-sm">{tEngine("selectEngineHint")}</p>
</div>
</div>
)}
</div>
</div>
{/* 编辑引擎弹窗 */}
{/* Edit engine dialog */}
<EngineEditDialog
engine={editingEngine}
open={isEditDialogOpen}
@@ -345,30 +358,30 @@ export default function ScanEnginePage() {
onSave={handleSaveYaml}
/>
{/* 新建引擎弹窗 */}
{/* Create engine dialog */}
<EngineCreateDialog
open={isCreateDialogOpen}
onOpenChange={setIsCreateDialogOpen}
onSave={handleCreateEngine}
/>
{/* 删除确认弹窗 */}
{/* Delete confirmation dialog */}
<AlertDialog open={deleteDialogOpen} onOpenChange={setDeleteDialogOpen}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle></AlertDialogTitle>
<AlertDialogTitle>{tConfirm("deleteTitle")}</AlertDialogTitle>
<AlertDialogDescription>
{engineToDelete?.name}
{tConfirm("deleteEngineMessage", { name: engineToDelete?.name ?? "" })}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel></AlertDialogCancel>
<AlertDialogCancel>{tCommon("actions.cancel")}</AlertDialogCancel>
<AlertDialogAction
onClick={confirmDelete}
className="bg-destructive text-destructive-foreground hover:bg-destructive/90"
disabled={deleteEngineMutation.isPending}
>
{deleteEngineMutation.isPending ? "删除中..." : "删除"}
{deleteEngineMutation.isPending ? tConfirm("deleting") : tCommon("actions.delete")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
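The feature toggles in this page hinge on `parseEngineFeatures`, which coerces each section of the parsed YAML into a boolean. Stripped of the React and js-yaml plumbing, the coercion can be sketched as follows (a minimal sketch; the config is assumed to be already parsed into a plain object):

```typescript
// Feature keys mirroring FEATURE_LIST in the page above.
const FEATURE_KEYS = [
  "subdomain_discovery",
  "port_scan",
  "site_scan",
  "fingerprint_detect",
  "directory_scan",
  "screenshot",
  "url_fetch",
  "vuln_scan",
] as const;

type FeatureKey = (typeof FEATURE_KEYS)[number];

// Coerce each configured section to a boolean; absent keys default to false.
function parseFeatures(config: Record<string, unknown>): Record<FeatureKey, boolean> {
  const features = {} as Record<FeatureKey, boolean>;
  for (const key of FEATURE_KEYS) {
    features[key] = !!config[key];
  }
  return features;
}

function countEnabled(config: Record<string, unknown>): number {
  return Object.values(parseFeatures(config)).filter(Boolean).length;
}

console.log(countEnabled({ port_scan: { top_ports: 1000 }, vuln_scan: {} })); // → 2
```

Note that any truthy section value (even an empty object) counts as enabled, while `null` or a missing key does not — matching the `!!config.x` checks in the diff.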


@@ -0,0 +1,228 @@
"use client"
import React from "react"
import { usePathname, useParams } from "next/navigation"
import Link from "next/link"
import { Target, LayoutDashboard, Package, FolderSearch, Image, ShieldAlert } from "lucide-react"
import { Tabs, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { Badge } from "@/components/ui/badge"
import { Skeleton } from "@/components/ui/skeleton"
import { useScan } from "@/hooks/use-scans"
import { useTranslations } from "next-intl"
export default function ScanHistoryLayout({
children,
}: {
children: React.ReactNode
}) {
const { id } = useParams<{ id: string }>()
const pathname = usePathname()
const { data: scanData, isLoading } = useScan(parseInt(id))
const t = useTranslations("scan.history")
// Get primary navigation active tab
const getPrimaryTab = () => {
if (pathname.includes("/overview")) return "overview"
if (pathname.includes("/directories")) return "directories"
if (pathname.includes("/screenshots")) return "screenshots"
if (pathname.includes("/vulnerabilities")) return "vulnerabilities"
// All asset pages fall under "assets"
if (
pathname.includes("/websites") ||
pathname.includes("/subdomain") ||
pathname.includes("/ip-addresses") ||
pathname.includes("/endpoints")
) {
return "assets"
}
return "overview"
}
// Get secondary navigation active tab (for assets)
const getSecondaryTab = () => {
if (pathname.includes("/websites")) return "websites"
if (pathname.includes("/subdomain")) return "subdomain"
if (pathname.includes("/ip-addresses")) return "ip-addresses"
if (pathname.includes("/endpoints")) return "endpoints"
return "websites"
}
// Check if we should show secondary navigation
const showSecondaryNav = getPrimaryTab() === "assets"
const basePath = `/scan/history/${id}`
const primaryPaths = {
overview: `${basePath}/overview/`,
assets: `${basePath}/websites/`, // Default to websites when clicking assets
directories: `${basePath}/directories/`,
screenshots: `${basePath}/screenshots/`,
vulnerabilities: `${basePath}/vulnerabilities/`,
}
const secondaryPaths = {
websites: `${basePath}/websites/`,
subdomain: `${basePath}/subdomain/`,
"ip-addresses": `${basePath}/ip-addresses/`,
endpoints: `${basePath}/endpoints/`,
}
// Get counts for each tab from scan data
const stats = scanData?.cachedStats
const counts = {
subdomain: stats?.subdomainsCount || 0,
endpoints: stats?.endpointsCount || 0,
websites: stats?.websitesCount || 0,
directories: stats?.directoriesCount || 0,
screenshots: stats?.screenshotsCount || 0,
vulnerabilities: stats?.vulnsTotal || 0,
"ip-addresses": stats?.ipsCount || 0,
}
// Calculate total assets count
const totalAssets = counts.websites + counts.subdomain + counts["ip-addresses"] + counts.endpoints
// Loading state
if (isLoading) {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* Header skeleton */}
<div className="flex items-center gap-2 px-4 lg:px-6">
<Skeleton className="h-4 w-16" />
<span className="text-muted-foreground">/</span>
<Skeleton className="h-4 w-32" />
</div>
{/* Tabs skeleton */}
<div className="flex gap-1 px-4 lg:px-6">
<Skeleton className="h-9 w-20" />
<Skeleton className="h-9 w-20" />
<Skeleton className="h-9 w-24" />
</div>
</div>
)
}
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6 h-full">
{/* Header: Page label + Scan info */}
<div className="flex items-center gap-2 text-sm px-4 lg:px-6">
<span className="text-muted-foreground">{t("breadcrumb.scanHistory")}</span>
<span className="text-muted-foreground">/</span>
<span className="font-medium flex items-center gap-1.5">
<Target className="h-4 w-4" />
{(scanData?.target as any)?.name || t("taskId", { id })}
</span>
</div>
{/* Primary navigation */}
<div className="px-4 lg:px-6">
<Tabs value={getPrimaryTab()}>
<TabsList>
<TabsTrigger value="overview" asChild>
<Link href={primaryPaths.overview} className="flex items-center gap-1.5">
<LayoutDashboard className="h-4 w-4" />
{t("tabs.overview")}
</Link>
</TabsTrigger>
<TabsTrigger value="assets" asChild>
<Link href={primaryPaths.assets} className="flex items-center gap-1.5">
<Package className="h-4 w-4" />
{t("tabs.assets")}
{totalAssets > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{totalAssets}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="directories" asChild>
<Link href={primaryPaths.directories} className="flex items-center gap-1.5">
<FolderSearch className="h-4 w-4" />
{t("tabs.directories")}
{counts.directories > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.directories}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="screenshots" asChild>
<Link href={primaryPaths.screenshots} className="flex items-center gap-1.5">
<Image className="h-4 w-4" />
{t("tabs.screenshots")}
{counts.screenshots > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.screenshots}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="vulnerabilities" asChild>
<Link href={primaryPaths.vulnerabilities} className="flex items-center gap-1.5">
<ShieldAlert className="h-4 w-4" />
{t("tabs.vulnerabilities")}
{counts.vulnerabilities > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.vulnerabilities}
</Badge>
)}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
</div>
{/* Secondary navigation (only for assets) */}
{showSecondaryNav && (
<div className="flex items-center px-4 lg:px-6">
<Tabs value={getSecondaryTab()} className="w-full">
<TabsList variant="underline">
<TabsTrigger value="websites" variant="underline" asChild>
<Link href={secondaryPaths.websites} className="flex items-center gap-0.5">
{t("tabs.websites")}
{counts.websites > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.websites}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="subdomain" variant="underline" asChild>
<Link href={secondaryPaths.subdomain} className="flex items-center gap-0.5">
{t("tabs.subdomains")}
{counts.subdomain > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.subdomain}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="ip-addresses" variant="underline" asChild>
<Link href={secondaryPaths["ip-addresses"]} className="flex items-center gap-0.5">
{t("tabs.ips")}
{counts["ip-addresses"] > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts["ip-addresses"]}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="endpoints" variant="underline" asChild>
<Link href={secondaryPaths.endpoints} className="flex items-center gap-0.5">
{t("tabs.urls")}
{counts.endpoints > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.endpoints}
</Badge>
)}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
</div>
)}
{/* Sub-page content */}
{children}
</div>
)
}
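The active-tab logic in this layout is plain substring matching on the pathname. Pulled out as a pure helper it looks roughly like this (a sketch — the component itself reads the pathname via `usePathname` rather than taking it as an argument):

```typescript
type PrimaryTab = "overview" | "assets" | "directories" | "screenshots" | "vulnerabilities";

// Asset sub-pages that all map to the "assets" primary tab.
const ASSET_SEGMENTS = ["/websites", "/subdomain", "/ip-addresses", "/endpoints"];

// Resolve the primary tab from a pathname, falling back to "overview".
function primaryTab(pathname: string): PrimaryTab {
  if (pathname.includes("/overview")) return "overview";
  if (pathname.includes("/directories")) return "directories";
  if (pathname.includes("/screenshots")) return "screenshots";
  if (pathname.includes("/vulnerabilities")) return "vulnerabilities";
  if (ASSET_SEGMENTS.some((seg) => pathname.includes(seg))) return "assets";
  return "overview";
}

console.log(primaryTab("/scan/history/7/ip-addresses/")); // → "assets"
```

Because matching is substring-based, route segments must stay unambiguous: any future path that happens to contain an earlier segment name would match that branch first.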


@@ -0,0 +1,19 @@
"use client"
import { useParams } from "next/navigation"
import { ScanOverview } from "@/components/scan/history/scan-overview"
/**
* Scan overview page
* Displays scan statistics and summary information
*/
export default function ScanOverviewPage() {
const { id } = useParams<{ id: string }>()
const scanId = Number(id)
return (
<div className="flex-1 flex flex-col min-h-0 px-4 lg:px-6">
<ScanOverview scanId={scanId} />
</div>
)
}


@@ -8,7 +8,7 @@ export default function ScanHistoryDetailPage() {
const router = useRouter()
useEffect(() => {
router.replace(`/scan/history/${id}/subdomain/`)
router.replace(`/scan/history/${id}/overview/`)
}, [id, router])
return null


@@ -0,0 +1,15 @@
"use client"
import { useParams } from "next/navigation"
import { ScreenshotsGallery } from "@/components/screenshots/screenshots-gallery"
export default function ScanScreenshotsPage() {
const { id } = useParams<{ id: string }>()
const scanId = Number(id)
return (
<div className="px-4 lg:px-6">
<ScreenshotsGallery scanId={scanId} />
</div>
)
}


@@ -8,7 +8,7 @@ export default function ScanHistoryVulnerabilitiesPage() {
const { id } = useParams<{ id: string }>()
return (
<div className="relative flex flex-col gap-4 overflow-auto px-4 lg:px-6">
<div className="px-4 lg:px-6">
<VulnerabilitiesDetailView scanId={Number(id)} />
</div>
)


@@ -1,31 +1,34 @@
"use client"
import { useTranslations } from "next-intl"
import { IconRadar } from "@tabler/icons-react"
import { ScanHistoryList } from "@/components/scan/history/scan-history-list"
import { ScanHistoryStatCards } from "@/components/scan/history/scan-history-stat-cards"
/**
*
*
* Scan history page
* Displays historical records of all scan tasks
*/
export default function ScanHistoryPage() {
const t = useTranslations("scan.history")
return (
<div className="@container/main flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 页面标题 */}
{/* Page title */}
<div className="flex items-center gap-3 px-4 lg:px-6">
<IconRadar className="size-8 text-primary" />
<div>
<h1 className="text-3xl font-bold"></h1>
<p className="text-muted-foreground"></p>
<h1 className="text-3xl font-bold">{t("title")}</h1>
<p className="text-muted-foreground">{t("description")}</p>
</div>
</div>
{/* 统计卡片 */}
{/* Statistics cards */}
<div className="px-4 lg:px-6">
<ScanHistoryStatCards />
</div>
{/* 扫描历史列表 */}
{/* Scan history list */}
<div className="px-4 lg:px-6">
<ScanHistoryList />
</div>


@@ -1,6 +1,7 @@
"use client"
import React from "react"
import { useTranslations } from "next-intl"
import { ScheduledScanDataTable } from "@/components/scan/scheduled/scheduled-scan-data-table"
import { createScheduledScanColumns } from "@/components/scan/scheduled/scheduled-scan-columns"
import { CreateScheduledScanDialog } from "@/components/scan/scheduled/create-scheduled-scan-dialog"
@@ -24,8 +25,8 @@ import type { ScheduledScan } from "@/types/scheduled-scan.types"
import { DataTableSkeleton } from "@/components/ui/data-table-skeleton"
/**
*
*
* Scheduled scan page
* Manage scheduled scan task configuration
*/
export default function ScheduledScanPage() {
const [createDialogOpen, setCreateDialogOpen] = React.useState(false)
@@ -34,11 +35,50 @@ export default function ScheduledScanPage() {
const [deleteDialogOpen, setDeleteDialogOpen] = React.useState(false)
const [deletingScheduledScan, setDeletingScheduledScan] = React.useState<ScheduledScan | null>(null)
// 分页状态
// Internationalization
const tColumns = useTranslations("columns")
const tCommon = useTranslations("common")
const tScan = useTranslations("scan")
const tConfirm = useTranslations("common.confirm")
// Build translation object
const translations = React.useMemo(() => ({
columns: {
taskName: tColumns("scheduledScan.taskName"),
scanEngine: tColumns("scheduledScan.scanEngine"),
cronExpression: tColumns("scheduledScan.cronExpression"),
scope: tColumns("scheduledScan.scope"),
status: tColumns("common.status"),
nextRun: tColumns("scheduledScan.nextRun"),
runCount: tColumns("scheduledScan.runCount"),
lastRun: tColumns("scheduledScan.lastRun"),
},
actions: {
editTask: tScan("editTask"),
delete: tCommon("actions.delete"),
openMenu: tCommon("actions.openMenu"),
},
status: {
enabled: tCommon("status.enabled"),
disabled: tCommon("status.disabled"),
},
cron: {
everyMinute: tScan("cron.everyMinute"),
everyNMinutes: tScan.raw("cron.everyNMinutes") as string,
everyHour: tScan.raw("cron.everyHour") as string,
everyNHours: tScan.raw("cron.everyNHours") as string,
everyDay: tScan.raw("cron.everyDay") as string,
everyWeek: tScan.raw("cron.everyWeek") as string,
everyMonth: tScan.raw("cron.everyMonth") as string,
weekdays: tScan.raw("cron.weekdays") as string[],
},
}), [tColumns, tCommon, tScan])
// Pagination state
const [page, setPage] = React.useState(1)
const [pageSize, setPageSize] = React.useState(10)
// 搜索状态
// Search state
const [searchQuery, setSearchQuery] = React.useState("")
const [isSearching, setIsSearching] = React.useState(false)
@@ -48,10 +88,10 @@ export default function ScheduledScanPage() {
setPage(1)
}
// 使用实际 API
// Use actual API
const { data, isLoading, isFetching, refetch } = useScheduledScans({ page, pageSize, search: searchQuery || undefined })
// 当请求完成时重置搜索状态
// Reset search state when request completes
React.useEffect(() => {
if (!isFetching && isSearching) {
setIsSearching(false)
@@ -64,7 +104,7 @@ export default function ScheduledScanPage() {
const total = data?.total || 0
const totalPages = data?.totalPages || 1
// 格式化日期
// Format date
const formatDate = React.useCallback((dateString: string) => {
const date = new Date(dateString)
return date.toLocaleString("zh-CN", {
@@ -76,19 +116,19 @@ export default function ScheduledScanPage() {
})
}, [])
// 编辑任务
// Edit task
const handleEdit = React.useCallback((scan: ScheduledScan) => {
setEditingScheduledScan(scan)
setEditDialogOpen(true)
}, [])
// 删除任务(打开确认弹窗)
// Delete task (open confirmation dialog)
const handleDelete = React.useCallback((scan: ScheduledScan) => {
setDeletingScheduledScan(scan)
setDeleteDialogOpen(true)
}, [])
// 确认删除任务
// Confirm delete task
const confirmDelete = React.useCallback(() => {
if (deletingScheduledScan) {
deleteScheduledScan(deletingScheduledScan.id)
@@ -97,28 +137,28 @@ export default function ScheduledScanPage() {
}
}, [deletingScheduledScan, deleteScheduledScan])
// 切换任务启用状态
// Toggle task enabled status
const handleToggleStatus = React.useCallback((scan: ScheduledScan, enabled: boolean) => {
toggleScheduledScan({ id: scan.id, isEnabled: enabled })
}, [toggleScheduledScan])
// 页码变化处理
// Page change handler
const handlePageChange = React.useCallback((newPage: number) => {
setPage(newPage)
}, [])
// 每页数量变化处理
// Page size change handler
const handlePageSizeChange = React.useCallback((newPageSize: number) => {
setPageSize(newPageSize)
setPage(1) // 重置到第一页
setPage(1) // Reset to first page
}, [])
// 添加新任务
// Add new task
const handleAddNew = React.useCallback(() => {
setCreateDialogOpen(true)
}, [])
// 创建列定义
// Create column definition
const columns = React.useMemo(
() =>
createScheduledScanColumns({
@@ -126,8 +166,9 @@ export default function ScheduledScanPage() {
handleEdit,
handleDelete,
handleToggleStatus,
t: translations,
}),
[formatDate, handleEdit, handleDelete, handleToggleStatus]
[formatDate, handleEdit, handleDelete, handleToggleStatus, translations]
)
if (isLoading) {
@@ -135,8 +176,8 @@ export default function ScheduledScanPage() {
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
<div className="flex items-center justify-between px-4 lg:px-6">
<div>
<h1 className="text-3xl font-bold"></h1>
<p className="text-muted-foreground mt-1"></p>
<h1 className="text-3xl font-bold">{tScan("scheduled.title")}</h1>
<p className="text-muted-foreground mt-1">{tScan("scheduled.description")}</p>
</div>
</div>
<DataTableSkeleton
@@ -150,26 +191,25 @@ export default function ScheduledScanPage() {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 页面标题 */}
{/* Page title */}
<div className="px-4 lg:px-6">
<div>
<h1 className="text-3xl font-bold"></h1>
<p className="text-muted-foreground mt-1"></p>
<h1 className="text-3xl font-bold">{tScan("scheduled.title")}</h1>
<p className="text-muted-foreground mt-1">{tScan("scheduled.description")}</p>
</div>
</div>
{/* 数据表格 */}
{/* Data table */}
<div className="px-4 lg:px-6">
<ScheduledScanDataTable
data={scheduledScans}
columns={columns}
onAddNew={handleAddNew}
searchPlaceholder="搜索任务名称..."
searchColumn="name"
searchPlaceholder={tScan("scheduled.searchPlaceholder")}
searchValue={searchQuery}
onSearch={handleSearchChange}
isSearching={isSearching}
addButtonText="新建定时扫描"
addButtonText={tScan("scheduled.createTitle")}
page={page}
pageSize={pageSize}
total={total}
@@ -179,14 +219,14 @@ export default function ScheduledScanPage() {
/>
</div>
{/* 新建定时扫描对话框 */}
{/* Create scheduled scan dialog */}
<CreateScheduledScanDialog
open={createDialogOpen}
onOpenChange={setCreateDialogOpen}
onSuccess={() => refetch()}
/>
{/* 编辑定时扫描对话框 */}
{/* Edit scheduled scan dialog */}
<EditScheduledScanDialog
open={editDialogOpen}
onOpenChange={setEditDialogOpen}
@@ -194,19 +234,19 @@ export default function ScheduledScanPage() {
onSuccess={() => refetch()}
/>
{/* 删除确认弹窗 */}
{/* Delete confirmation dialog */}
<AlertDialog open={deleteDialogOpen} onOpenChange={setDeleteDialogOpen}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle></AlertDialogTitle>
<AlertDialogTitle>{tConfirm("deleteTitle")}</AlertDialogTitle>
<AlertDialogDescription>
&quot;{deletingScheduledScan?.name}&quot;
{tConfirm("deleteScheduledScanMessage", { name: deletingScheduledScan?.name ?? "" })}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel></AlertDialogCancel>
<AlertDialogCancel>{tCommon("actions.cancel")}</AlertDialogCancel>
<AlertDialogAction onClick={confirmDelete} className="bg-destructive text-destructive-foreground hover:bg-destructive/90">
{tCommon("actions.delete")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
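The cron strings passed into the column translations above (`everyMinute`, `everyNMinutes`, …) are raw templates with a placeholder. How the column renderer consumes them is not shown in this diff; a hypothetical `describeCron` illustrates the idea (the template texts and the `{n}` placeholder convention are assumptions):

```typescript
// Hypothetical templates standing in for the translation values.
const cronTemplates = {
  everyMinute: "Every minute",
  everyNMinutes: "Every {n} minutes",
  everyHour: "Every hour at minute {n}",
};

// Describe a few simple five-field cron expressions; fall back to the raw string.
function describeCron(expr: string, t: typeof cronTemplates): string {
  const [minute, hour] = expr.trim().split(/\s+/);
  if (minute === "*" && hour === "*") return t.everyMinute;
  const step = minute?.match(/^\*\/(\d+)$/);
  if (step && hour === "*") return t.everyNMinutes.replace("{n}", step[1]);
  if (minute !== undefined && /^\d+$/.test(minute) && hour === "*") {
    return t.everyHour.replace("{n}", minute);
  }
  return expr;
}

console.log(describeCron("*/5 * * * *", cronTemplates)); // → "Every 5 minutes"
```

Expressions the sketch does not recognize are shown verbatim, which is a reasonable default for arbitrary user-supplied cron strings.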


@@ -0,0 +1,5 @@
import { SearchPage } from "@/components/search"
export default function Search() {
return <SearchPage />
}


@@ -0,0 +1,306 @@
"use client"
import React, { useState, useEffect } from 'react'
import { IconEye, IconEyeOff, IconWorldSearch, IconRadar2 } from '@tabler/icons-react'
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'
import { Button } from '@/components/ui/button'
import { Input } from '@/components/ui/input'
import { Switch } from '@/components/ui/switch'
import { Separator } from '@/components/ui/separator'
import { Badge } from '@/components/ui/badge'
import { Skeleton } from '@/components/ui/skeleton'
import { useApiKeySettings, useUpdateApiKeySettings } from '@/hooks/use-api-key-settings'
import type { ApiKeySettings } from '@/types/api-key-settings.types'
// Password input component (with show/hide toggle)
function PasswordInput({ value, onChange, placeholder, disabled }: {
value: string
onChange: (value: string) => void
placeholder?: string
disabled?: boolean
}) {
const [show, setShow] = useState(false)
return (
<div className="relative">
<Input
type={show ? 'text' : 'password'}
value={value}
onChange={(e) => onChange(e.target.value)}
placeholder={placeholder}
disabled={disabled}
className="pr-10"
/>
<button
type="button"
onClick={() => setShow(!show)}
className="absolute right-3 top-1/2 -translate-y-1/2 text-muted-foreground hover:text-foreground"
>
{show ? <IconEyeOff className="h-4 w-4" /> : <IconEye className="h-4 w-4" />}
</button>
</div>
)
}
// Provider configuration definitions
const PROVIDERS = [
{
key: 'fofa',
name: 'FOFA',
description: 'Cyberspace mapping platform providing global internet asset search',
icon: IconWorldSearch,
color: 'text-blue-500',
bgColor: 'bg-blue-500/10',
fields: [
{ name: 'email', label: 'Email', type: 'text', placeholder: 'your@email.com' },
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter FOFA API Key' },
],
docUrl: 'https://fofa.info/api',
},
{
key: 'hunter',
name: 'Hunter (鹰图)',
description: 'Qianxin threat intelligence platform providing cyberspace asset mapping',
icon: IconRadar2,
color: 'text-orange-500',
bgColor: 'bg-orange-500/10',
fields: [
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter Hunter API Key' },
],
docUrl: 'https://hunter.qianxin.com/',
},
{
key: 'shodan',
name: 'Shodan',
description: 'The largest search engine for internet-connected devices',
icon: IconWorldSearch,
color: 'text-red-500',
bgColor: 'bg-red-500/10',
fields: [
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter Shodan API Key' },
],
docUrl: 'https://developer.shodan.io/',
},
{
key: 'censys',
name: 'Censys',
description: 'Internet asset search and monitoring platform',
icon: IconWorldSearch,
color: 'text-purple-500',
bgColor: 'bg-purple-500/10',
fields: [
{ name: 'apiId', label: 'API ID', type: 'text', placeholder: 'Enter Censys API ID' },
{ name: 'apiSecret', label: 'API Secret', type: 'password', placeholder: 'Enter Censys API Secret' },
],
docUrl: 'https://search.censys.io/api',
},
{
key: 'zoomeye',
name: 'ZoomEye (钟馗之眼)',
description: 'Knownsec cyberspace search engine',
icon: IconWorldSearch,
color: 'text-green-500',
bgColor: 'bg-green-500/10',
fields: [
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter ZoomEye API Key' },
],
docUrl: 'https://www.zoomeye.org/doc',
},
{
key: 'securitytrails',
name: 'SecurityTrails',
description: 'DNS history and subdomain data platform',
icon: IconWorldSearch,
color: 'text-cyan-500',
bgColor: 'bg-cyan-500/10',
fields: [
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter SecurityTrails API Key' },
],
docUrl: 'https://securitytrails.com/corp/api',
},
{
key: 'threatbook',
name: 'ThreatBook (微步在线)',
description: 'Threat intelligence platform for domain and IP intelligence lookups',
icon: IconWorldSearch,
color: 'text-indigo-500',
bgColor: 'bg-indigo-500/10',
fields: [
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter ThreatBook API Key' },
],
docUrl: 'https://x.threatbook.com/api',
},
{
key: 'quake',
name: 'Quake (360)',
description: '360 cyberspace mapping system',
icon: IconWorldSearch,
color: 'text-teal-500',
bgColor: 'bg-teal-500/10',
fields: [
{ name: 'apiKey', label: 'API Key', type: 'password', placeholder: 'Enter Quake API Key' },
],
docUrl: 'https://quake.360.net/quake/#/help',
},
]
// Default settings
const DEFAULT_SETTINGS: ApiKeySettings = {
fofa: { enabled: false, email: '', apiKey: '' },
hunter: { enabled: false, apiKey: '' },
shodan: { enabled: false, apiKey: '' },
censys: { enabled: false, apiId: '', apiSecret: '' },
zoomeye: { enabled: false, apiKey: '' },
securitytrails: { enabled: false, apiKey: '' },
threatbook: { enabled: false, apiKey: '' },
quake: { enabled: false, apiKey: '' },
}
export default function ApiKeysSettingsPage() {
const { data: settings, isLoading } = useApiKeySettings()
const updateMutation = useUpdateApiKeySettings()
const [formData, setFormData] = useState<ApiKeySettings>(DEFAULT_SETTINGS)
const [hasChanges, setHasChanges] = useState(false)
// Sync form state once settings load
useEffect(() => {
if (settings) {
setFormData({ ...DEFAULT_SETTINGS, ...settings })
setHasChanges(false)
}
}, [settings])
const updateProvider = (providerKey: string, field: string, value: any) => {
setFormData(prev => ({
...prev,
[providerKey]: {
...prev[providerKey as keyof ApiKeySettings],
[field]: value,
}
}))
setHasChanges(true)
}
const handleSave = () => {
// Only clear the dirty flag after the mutation succeeds
updateMutation.mutate(formData, {
onSuccess: () => setHasChanges(false),
})
}
const enabledCount = Object.values(formData).filter((p: any) => p?.enabled).length
if (isLoading) {
return (
<div className="p-4 md:p-6 space-y-6">
<div>
<Skeleton className="h-8 w-48" />
<Skeleton className="h-4 w-96 mt-2" />
</div>
<div className="grid gap-4">
{[1, 2, 3].map((i) => (
<Skeleton key={i} className="h-24 w-full" />
))}
</div>
</div>
)
}
return (
<div className="p-4 md:p-6 space-y-6">
{/* Page header */}
<div>
<div className="flex items-center gap-2">
<h1 className="text-2xl font-semibold">API Keys</h1>
{enabledCount > 0 && (
<Badge variant="secondary">{enabledCount} enabled</Badge>
)}
</div>
<p className="text-muted-foreground mt-1">
Configure third-party API keys for tools such as subfinder
</p>
</div>
{/* Provider card list */}
<div className="grid gap-4">
{PROVIDERS.map((provider) => {
const data = formData[provider.key as keyof ApiKeySettings] || {}
const isEnabled = (data as any)?.enabled || false
return (
<Card key={provider.key}>
<CardHeader className="pb-4">
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<div className={`flex h-10 w-10 items-center justify-center rounded-lg ${provider.bgColor}`}>
<provider.icon className={`h-5 w-5 ${provider.color}`} />
</div>
<div>
<div className="flex items-center gap-2">
<CardTitle className="text-base">{provider.name}</CardTitle>
{isEnabled && <Badge variant="outline" className="text-xs text-green-600">Enabled</Badge>}
</div>
<CardDescription>{provider.description}</CardDescription>
</div>
</div>
<Switch
checked={isEnabled}
onCheckedChange={(checked) => updateProvider(provider.key, 'enabled', checked)}
/>
</div>
</CardHeader>
{/* Expanded config form */}
{isEnabled && (
<CardContent className="pt-0">
<Separator className="mb-4" />
<div className="space-y-4">
{provider.fields.map((field) => (
<div key={field.name} className="space-y-2">
<label className="text-sm font-medium">{field.label}</label>
{field.type === 'password' ? (
<PasswordInput
value={(data as any)[field.name] || ''}
onChange={(value) => updateProvider(provider.key, field.name, value)}
placeholder={field.placeholder}
/>
) : (
<Input
type="text"
value={(data as any)[field.name] || ''}
onChange={(e) => updateProvider(provider.key, field.name, e.target.value)}
placeholder={field.placeholder}
/>
)}
</div>
))}
<p className="text-xs text-muted-foreground">
Get an API Key from:
<a
href={provider.docUrl}
target="_blank"
rel="noopener noreferrer"
className="text-primary hover:underline ml-1"
>
{provider.docUrl}
</a>
</p>
</div>
</CardContent>
)}
</Card>
)
})}
</div>
{/* Save button */}
<div className="flex justify-end">
<Button
onClick={handleSave}
disabled={updateMutation.isPending || !hasChanges}
>
{updateMutation.isPending ? 'Saving...' : 'Save settings'}
</Button>
</div>
</div>
)
}


@@ -0,0 +1,132 @@
"use client"
import React, { useState, useEffect } from "react"
import { useTranslations } from "next-intl"
import { AlertTriangle, Loader2, Ban } from "lucide-react"
import { Button } from "@/components/ui/button"
import { Textarea } from "@/components/ui/textarea"
import { Skeleton } from "@/components/ui/skeleton"
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card"
import { useGlobalBlacklist, useUpdateGlobalBlacklist } from "@/hooks/use-global-blacklist"
/**
* Global blacklist settings page
*/
export default function GlobalBlacklistPage() {
const t = useTranslations("pages.settings.blacklist")
const [blacklistText, setBlacklistText] = useState("")
const [hasChanges, setHasChanges] = useState(false)
const { data, isLoading, error } = useGlobalBlacklist()
const updateBlacklist = useUpdateGlobalBlacklist()
// Initialize text when data loads
useEffect(() => {
if (data?.patterns) {
setBlacklistText(data.patterns.join("\n"))
setHasChanges(false)
}
}, [data])
// Handle text change
const handleTextChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
setBlacklistText(e.target.value)
setHasChanges(true)
}
// Handle save
const handleSave = () => {
const patterns = blacklistText
.split("\n")
.map((line) => line.trim())
.filter((line) => line.length > 0)
updateBlacklist.mutate(
{ patterns },
{
onSuccess: () => {
setHasChanges(false)
},
}
)
}
if (isLoading) {
return (
<div className="flex flex-1 flex-col gap-4 p-4">
<div className="space-y-2">
<Skeleton className="h-8 w-48" />
<Skeleton className="h-4 w-96" />
</div>
<Skeleton className="h-[400px] w-full" />
</div>
)
}
if (error) {
return (
<div className="flex flex-1 flex-col items-center justify-center py-12">
<AlertTriangle className="h-10 w-10 text-destructive mb-4" />
<p className="text-muted-foreground">{t("loadError")}</p>
</div>
)
}
return (
<div className="flex flex-1 flex-col gap-4 p-4">
{/* Page header */}
<div>
<h1 className="text-2xl font-bold">{t("title")}</h1>
<p className="text-muted-foreground">{t("description")}</p>
</div>
{/* Blacklist card */}
<Card>
<CardHeader>
<div className="flex items-center gap-2">
<Ban className="h-5 w-5 text-muted-foreground" />
<CardTitle>{t("card.title")}</CardTitle>
</div>
<CardDescription>{t("card.description")}</CardDescription>
</CardHeader>
<CardContent className="space-y-4">
{/* Rules hint */}
<div className="flex flex-wrap items-center gap-x-4 gap-y-2 text-sm text-muted-foreground">
<span className="font-medium text-foreground">{t("rules.title")}:</span>
<span><code className="bg-muted px-1.5 py-0.5 rounded text-xs">*.gov</code> {t("rules.domain")}</span>
<span><code className="bg-muted px-1.5 py-0.5 rounded text-xs">*cdn*</code> {t("rules.keyword")}</span>
<span><code className="bg-muted px-1.5 py-0.5 rounded text-xs">192.168.1.1</code> {t("rules.ip")}</span>
<span><code className="bg-muted px-1.5 py-0.5 rounded text-xs">10.0.0.0/8</code> {t("rules.cidr")}</span>
</div>
{/* Scope hint */}
<div className="rounded-lg border bg-muted/50 p-3 text-sm">
<p className="text-muted-foreground">{t("scopeHint")}</p>
</div>
{/* Input */}
<Textarea
value={blacklistText}
onChange={handleTextChange}
placeholder={t("placeholder")}
className="min-h-[320px] font-mono text-sm"
/>
{/* Save button */}
<div className="flex justify-end">
<Button
onClick={handleSave}
disabled={!hasChanges || updateBlacklist.isPending}
>
{updateBlacklist.isPending && (
<Loader2 className="mr-2 h-4 w-4 animate-spin" />
)}
{t("save")}
</Button>
</div>
</CardContent>
</Card>
</div>
)
}


@@ -2,6 +2,7 @@
import React from 'react'
import { useForm } from 'react-hook-form'
import { useTranslations } from 'next-intl'
import { zodResolver } from '@hookform/resolvers/zod'
import * as z from 'zod'
import { IconBrandDiscord, IconMail, IconBrandSlack, IconScan, IconShieldCheck, IconWorld, IconSettings } from '@tabler/icons-react'
@@ -16,66 +17,82 @@ import { Separator } from '@/components/ui/separator'
import { Badge } from '@/components/ui/badge'
import { useNotificationSettings, useUpdateNotificationSettings } from '@/hooks/use-notification-settings'
const schema = z
.object({
discord: z.object({
enabled: z.boolean(),
webhookUrl: z.string().url('请输入有效的 Discord Webhook URL').or(z.literal('')),
}),
categories: z.object({
scan: z.boolean(), // 扫描任务
vulnerability: z.boolean(), // 漏洞发现
asset: z.boolean(), // 资产发现
system: z.boolean(), // 系统消息
}),
})
.superRefine((val, ctx) => {
if (val.discord.enabled) {
if (!val.discord.webhookUrl || val.discord.webhookUrl.trim() === '') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: '启用 Discord 时必须填写 Webhook URL',
path: ['discord', 'webhookUrl'],
})
}
}
})
const NOTIFICATION_CATEGORIES = [
{
key: 'scan' as const,
label: '扫描任务',
description: '扫描启动、进度、完成、失败等通知',
icon: IconScan,
},
{
key: 'vulnerability' as const,
label: '漏洞发现',
description: '发现安全漏洞时通知',
icon: IconShieldCheck,
},
{
key: 'asset' as const,
label: '资产发现',
description: '发现新子域名、IP、端口等资产',
icon: IconWorld,
},
{
key: 'system' as const,
label: '系统消息',
description: '系统级通知和公告',
icon: IconSettings,
},
]
export default function NotificationSettingsPage() {
const t = useTranslations("settings.notifications")
const { data, isLoading } = useNotificationSettings()
const updateMutation = useUpdateNotificationSettings()
// Schema with translations
const schema = z
.object({
discord: z.object({
enabled: z.boolean(),
webhookUrl: z.string().url(t("discord.urlInvalid")).or(z.literal('')),
}),
wecom: z.object({
enabled: z.boolean(),
webhookUrl: z.string().url(t("wecom.urlInvalid")).or(z.literal('')),
}),
categories: z.object({
scan: z.boolean(),
vulnerability: z.boolean(),
asset: z.boolean(),
system: z.boolean(),
}),
})
.superRefine((val, ctx) => {
if (val.discord.enabled) {
if (!val.discord.webhookUrl || val.discord.webhookUrl.trim() === '') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: t("discord.requiredError"),
path: ['discord', 'webhookUrl'],
})
}
}
if (val.wecom.enabled) {
if (!val.wecom.webhookUrl || val.wecom.webhookUrl.trim() === '') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: t("wecom.requiredError"),
path: ['wecom', 'webhookUrl'],
})
}
}
})
const NOTIFICATION_CATEGORIES = [
{
key: 'scan' as const,
label: t("categories.scan"),
description: t("categories.scanDesc"),
icon: IconScan,
},
{
key: 'vulnerability' as const,
label: t("categories.vulnerability"),
description: t("categories.vulnerabilityDesc"),
icon: IconShieldCheck,
},
{
key: 'asset' as const,
label: t("categories.asset"),
description: t("categories.assetDesc"),
icon: IconWorld,
},
{
key: 'system' as const,
label: t("categories.system"),
description: t("categories.systemDesc"),
icon: IconSettings,
},
]
const form = useForm<z.infer<typeof schema>>({
resolver: zodResolver(schema),
values: data ?? {
discord: { enabled: false, webhookUrl: '' },
wecom: { enabled: false, webhookUrl: '' },
categories: {
scan: true,
vulnerability: true,
@@ -90,25 +107,26 @@ export default function NotificationSettingsPage() {
}
const discordEnabled = form.watch('discord.enabled')
const wecomEnabled = form.watch('wecom.enabled')
return (
<div className="p-4 md:p-6 space-y-6">
<div>
<h1 className="text-2xl font-semibold"></h1>
<p className="text-muted-foreground mt-1"></p>
<h1 className="text-2xl font-semibold">{t("pageTitle")}</h1>
<p className="text-muted-foreground mt-1">{t("pageDesc")}</p>
</div>
<Tabs defaultValue="channels" className="w-full">
<TabsList>
<TabsTrigger value="channels"></TabsTrigger>
<TabsTrigger value="preferences"></TabsTrigger>
<TabsTrigger value="channels">{t("tabs.channels")}</TabsTrigger>
<TabsTrigger value="preferences">{t("tabs.preferences")}</TabsTrigger>
</TabsList>
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)}>
{/* 推送渠道 Tab */}
{/* Push channels tab */}
<TabsContent value="channels" className="space-y-4 mt-4">
{/* Discord 卡片 */}
{/* Discord card */}
<Card>
<CardHeader className="pb-4">
<div className="flex items-center justify-between">
@@ -117,8 +135,8 @@ export default function NotificationSettingsPage() {
<IconBrandDiscord className="h-5 w-5 text-[#5865F2]" />
</div>
<div>
<CardTitle className="text-base">Discord</CardTitle>
<CardDescription> Discord </CardDescription>
<CardTitle className="text-base">{t("discord.title")}</CardTitle>
<CardDescription>{t("discord.description")}</CardDescription>
</div>
</div>
<FormField
@@ -144,16 +162,16 @@ export default function NotificationSettingsPage() {
name="discord.webhookUrl"
render={({ field }) => (
<FormItem>
<FormLabel>Webhook URL</FormLabel>
<FormLabel>{t("discord.webhookLabel")}</FormLabel>
<FormControl>
<Input
placeholder="https://discord.com/api/webhooks/..."
placeholder={t("discord.webhookPlaceholder")}
{...field}
disabled={isLoading || updateMutation.isPending}
/>
</FormControl>
<FormDescription>
Discord Webhook
{t("discord.webhookHelp")}
</FormDescription>
<FormMessage />
</FormItem>
@@ -163,7 +181,7 @@ export default function NotificationSettingsPage() {
)}
</Card>
{/* 邮件 - 即将支持 */}
{/* Email - Coming soon */}
<Card className="opacity-60">
<CardHeader className="pb-4">
<div className="flex items-center justify-between">
@@ -173,10 +191,10 @@ export default function NotificationSettingsPage() {
</div>
<div>
<div className="flex items-center gap-2">
<CardTitle className="text-base"></CardTitle>
<Badge variant="secondary" className="text-xs"></Badge>
<CardTitle className="text-base">{t("emailChannel.title")}</CardTitle>
<Badge variant="secondary" className="text-xs">{t("emailChannel.comingSoon")}</Badge>
</div>
<CardDescription></CardDescription>
<CardDescription>{t("emailChannel.description")}</CardDescription>
</div>
</div>
<Switch disabled />
@@ -184,34 +202,68 @@ export default function NotificationSettingsPage() {
</CardHeader>
</Card>
{/* 飞书/钉钉/企微 - 即将支持 */}
<Card className="opacity-60">
{/* WeCom (Enterprise WeChat) */}
<Card>
<CardHeader className="pb-4">
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-muted">
<IconBrandSlack className="h-5 w-5 text-muted-foreground" />
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-[#07C160]/10">
<IconBrandSlack className="h-5 w-5 text-[#07C160]" />
</div>
<div>
<div className="flex items-center gap-2">
<CardTitle className="text-base"> / / </CardTitle>
<Badge variant="secondary" className="text-xs"></Badge>
</div>
<CardDescription></CardDescription>
<CardTitle className="text-base">{t("wecom.title")}</CardTitle>
<CardDescription>{t("wecom.description")}</CardDescription>
</div>
</div>
<Switch disabled />
<FormField
control={form.control}
name="wecom.enabled"
render={({ field }) => (
<FormControl>
<Switch
checked={field.value}
onCheckedChange={field.onChange}
disabled={isLoading || updateMutation.isPending}
/>
</FormControl>
)}
/>
</div>
</CardHeader>
{wecomEnabled && (
<CardContent className="pt-0">
<Separator className="mb-4" />
<FormField
control={form.control}
name="wecom.webhookUrl"
render={({ field }) => (
<FormItem>
<FormLabel>{t("wecom.webhookLabel")}</FormLabel>
<FormControl>
<Input
placeholder={t("wecom.webhookPlaceholder")}
{...field}
disabled={isLoading || updateMutation.isPending}
/>
</FormControl>
<FormDescription>
{t("wecom.webhookHelp")}
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
</CardContent>
)}
</Card>
</TabsContent>
{/* 通知偏好 Tab */}
{/* Notification preferences tab */}
<TabsContent value="preferences" className="mt-4">
<Card>
<CardHeader>
<CardTitle className="text-base"></CardTitle>
<CardDescription></CardDescription>
<CardTitle className="text-base">{t("categories.title")}</CardTitle>
<CardDescription>{t("categories.description")}</CardDescription>
</CardHeader>
<CardContent className="space-y-1">
{NOTIFICATION_CATEGORIES.map((category) => (
@@ -249,10 +301,10 @@ export default function NotificationSettingsPage() {
</Card>
</TabsContent>
{/* 保存按钮 */}
{/* Save button */}
<div className="flex justify-end mt-6">
<Button type="submit" disabled={updateMutation.isPending || isLoading}>
{t("saveSettings")}
</Button>
</div>
</form>


@@ -0,0 +1,11 @@
"use client"
import { SystemLogsView } from "@/components/settings/system-logs"
export default function SystemLogsPage() {
return (
<div className="flex flex-1 flex-col p-4 h-full">
<SystemLogsView />
</div>
)
}


@@ -1,15 +1,18 @@
"use client"
import { WorkerList } from "@/components/settings/workers"
import { useTranslations } from "next-intl"
export default function WorkersPage() {
const t = useTranslations("pages.workers")
return (
<div className="flex flex-1 flex-col gap-4 p-4">
<div className="flex items-center justify-between">
<div>
<h1 className="text-2xl font-bold tracking-tight"></h1>
<h1 className="text-2xl font-bold tracking-tight">{t("title")}</h1>
<p className="text-muted-foreground">
VPS
{t("description")}
</p>
</div>
</div>


@@ -4,16 +4,16 @@ import { useParams, useRouter } from "next/navigation"
import { useEffect } from "react"
/**
*
*
* Target detail page (compatible with old routes)
* Automatically redirects to overview page
*/
export default function TargetDetailsPage() {
const { id } = useParams<{ id: string }>()
const router = useRouter()
useEffect(() => {
// 重定向到子域名页面
router.replace(`/target/${id}/subdomain/`)
// Redirect to overview page
router.replace(`/target/${id}/overview/`)
}, [id, router])
return null


@@ -5,8 +5,8 @@ import { useParams } from "next/navigation"
import { EndpointsDetailView } from "@/components/endpoints"
/**
*
*
* Target endpoints page
* Displays endpoint details under the target
*/
export default function TargetEndpointsPage() {
const { id } = useParams<{ id: string }>()


@@ -0,0 +1,301 @@
"use client"
import React from "react"
import { usePathname, useParams } from "next/navigation"
import Link from "next/link"
import { Target, LayoutDashboard, Package, FolderSearch, Image, ShieldAlert, Settings, HelpCircle } from "lucide-react"
import { Skeleton } from "@/components/ui/skeleton"
import { Tabs, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { Badge } from "@/components/ui/badge"
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip"
import { useTarget } from "@/hooks/use-targets"
import { useTranslations } from "next-intl"
/**
* Target detail layout
* Two-level navigation: Overview / Assets / Vulnerabilities
* Assets has secondary navigation for different asset types
*/
export default function TargetLayout({
children,
}: {
children: React.ReactNode
}) {
const { id } = useParams<{ id: string }>()
const pathname = usePathname()
const t = useTranslations("pages.targetDetail")
// Use React Query to get target data
const {
data: target,
isLoading,
error
} = useTarget(Number(id))
// Get primary navigation active tab
const getPrimaryTab = () => {
if (pathname.includes("/overview")) return "overview"
if (pathname.includes("/directories")) return "directories"
if (pathname.includes("/screenshots")) return "screenshots"
if (pathname.includes("/vulnerabilities")) return "vulnerabilities"
if (pathname.includes("/settings")) return "settings"
// All asset pages fall under "assets"
if (
pathname.includes("/websites") ||
pathname.includes("/subdomain") ||
pathname.includes("/ip-addresses") ||
pathname.includes("/endpoints")
) {
return "assets"
}
return "overview"
}
// Get secondary navigation active tab (for assets)
const getSecondaryTab = () => {
if (pathname.includes("/websites")) return "websites"
if (pathname.includes("/subdomain")) return "subdomain"
if (pathname.includes("/ip-addresses")) return "ip-addresses"
if (pathname.includes("/endpoints")) return "endpoints"
return "websites"
}
// Check if we should show secondary navigation
const showSecondaryNav = getPrimaryTab() === "assets"
// Tab path mapping
const basePath = `/target/${id}`
const primaryPaths = {
overview: `${basePath}/overview/`,
assets: `${basePath}/websites/`, // Default to websites when clicking assets
directories: `${basePath}/directories/`,
screenshots: `${basePath}/screenshots/`,
vulnerabilities: `${basePath}/vulnerabilities/`,
settings: `${basePath}/settings/`,
}
const secondaryPaths = {
websites: `${basePath}/websites/`,
subdomain: `${basePath}/subdomain/`,
"ip-addresses": `${basePath}/ip-addresses/`,
endpoints: `${basePath}/endpoints/`,
}
// Get counts for each tab from target data
const counts = {
subdomain: (target as any)?.summary?.subdomains || 0,
endpoints: (target as any)?.summary?.endpoints || 0,
websites: (target as any)?.summary?.websites || 0,
directories: (target as any)?.summary?.directories || 0,
vulnerabilities: (target as any)?.summary?.vulnerabilities?.total || 0,
"ip-addresses": (target as any)?.summary?.ips || 0,
screenshots: (target as any)?.summary?.screenshots || 0,
}
// Calculate total assets count
const totalAssets = counts.websites + counts.subdomain + counts["ip-addresses"] + counts.endpoints
// Loading state
if (isLoading) {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* Header skeleton */}
<div className="flex items-center gap-2 px-4 lg:px-6">
<Skeleton className="h-4 w-16" />
<span className="text-muted-foreground">/</span>
<Skeleton className="h-4 w-32" />
</div>
{/* Tabs skeleton */}
<div className="flex gap-1 px-4 lg:px-6">
<Skeleton className="h-9 w-20" />
<Skeleton className="h-9 w-20" />
<Skeleton className="h-9 w-24" />
</div>
</div>
)
}
// Error state
if (error) {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
<div className="flex items-center justify-center py-12">
<div className="text-center">
<Target className="mx-auto text-destructive mb-4" />
<h3 className="text-lg font-semibold mb-2">{t("error.title")}</h3>
<p className="text-muted-foreground">
{error.message || t("error.message")}
</p>
</div>
</div>
</div>
)
}
if (!target) {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
<div className="flex items-center justify-center py-12">
<div className="text-center">
<Target className="mx-auto text-muted-foreground mb-4" />
<h3 className="text-lg font-semibold mb-2">{t("notFound.title")}</h3>
<p className="text-muted-foreground">
{t("notFound.message", { id })}
</p>
</div>
</div>
</div>
)
}
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* Header: Page label + Target name */}
<div className="flex items-center gap-2 text-sm px-4 lg:px-6">
<span className="text-muted-foreground">{t("breadcrumb.targetDetail")}</span>
<span className="text-muted-foreground">/</span>
<span className="font-medium flex items-center gap-1.5">
<Target className="h-4 w-4" />
{target.name}
</span>
</div>
{/* Primary navigation */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div className="flex items-center gap-3">
<Tabs value={getPrimaryTab()}>
<TabsList>
<TabsTrigger value="overview" asChild>
<Link href={primaryPaths.overview} className="flex items-center gap-1.5">
<LayoutDashboard className="h-4 w-4" />
{t("tabs.overview")}
</Link>
</TabsTrigger>
<TabsTrigger value="assets" asChild>
<Link href={primaryPaths.assets} className="flex items-center gap-1.5">
<Package className="h-4 w-4" />
{t("tabs.assets")}
{totalAssets > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{totalAssets}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="directories" asChild>
<Link href={primaryPaths.directories} className="flex items-center gap-1.5">
<FolderSearch className="h-4 w-4" />
{t("tabs.directories")}
{counts.directories > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.directories}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="screenshots" asChild>
<Link href={primaryPaths.screenshots} className="flex items-center gap-1.5">
<Image className="h-4 w-4" />
{t("tabs.screenshots")}
{counts.screenshots > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.screenshots}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="vulnerabilities" asChild>
<Link href={primaryPaths.vulnerabilities} className="flex items-center gap-1.5">
<ShieldAlert className="h-4 w-4" />
{t("tabs.vulnerabilities")}
{counts.vulnerabilities > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.vulnerabilities}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="settings" asChild>
<Link href={primaryPaths.settings} className="flex items-center gap-1.5">
<Settings className="h-4 w-4" />
{t("tabs.settings")}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
{getPrimaryTab() === "directories" && (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<HelpCircle className="h-4 w-4 text-muted-foreground cursor-help" />
</TooltipTrigger>
<TooltipContent side="right" className="max-w-sm">
{t("directoriesHelp")}
</TooltipContent>
</Tooltip>
</TooltipProvider>
)}
</div>
</div>
{/* Secondary navigation (only for assets) */}
{showSecondaryNav && (
<div className="flex items-center px-4 lg:px-6">
<Tabs value={getSecondaryTab()} className="w-full">
<TabsList variant="underline">
<TabsTrigger value="websites" variant="underline" asChild>
<Link href={secondaryPaths.websites} className="flex items-center gap-0.5">
{t("tabs.websites")}
{counts.websites > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.websites}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="subdomain" variant="underline" asChild>
<Link href={secondaryPaths.subdomain} className="flex items-center gap-0.5">
{t("tabs.subdomains")}
{counts.subdomain > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.subdomain}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="ip-addresses" variant="underline" asChild>
<Link href={secondaryPaths["ip-addresses"]} className="flex items-center gap-0.5">
{t("tabs.ips")}
{counts["ip-addresses"] > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts["ip-addresses"]}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="endpoints" variant="underline" asChild>
<Link href={secondaryPaths.endpoints} className="flex items-center gap-0.5">
{t("tabs.urls")}
{counts.endpoints > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.endpoints}
</Badge>
)}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
</div>
)}
{/* Sub-page content */}
{children}
</div>
)
}


@@ -0,0 +1,19 @@
"use client"
import { useParams } from "next/navigation"
import { TargetOverview } from "@/components/target/target-overview"
/**
* Target overview page
* Displays target statistics and summary information
*/
export default function TargetOverviewPage() {
const { id } = useParams<{ id: string }>()
const targetId = Number(id)
return (
<div className="px-4 lg:px-6">
<TargetOverview targetId={targetId} />
</div>
)
}


@@ -4,16 +4,16 @@ import { useParams, useRouter } from "next/navigation"
import { useEffect } from "react"
/**
*
*
* Target detail default page
* Automatically redirects to overview page
*/
export default function TargetDetailPage() {
const { id } = useParams<{ id: string }>()
const router = useRouter()
useEffect(() => {
// 重定向到子域名页面
router.replace(`/target/${id}/subdomain/`)
// Redirect to overview page
router.replace(`/target/${id}/overview/`)
}, [id, router])
return null


@@ -0,0 +1,15 @@
"use client"
import { useParams } from "next/navigation"
import { ScreenshotsGallery } from "@/components/screenshots/screenshots-gallery"
export default function ScreenshotsPage() {
const { id } = useParams<{ id: string }>()
const targetId = Number(id)
return (
<div className="px-4 lg:px-6">
<ScreenshotsGallery targetId={targetId} />
</div>
)
}


@@ -0,0 +1,19 @@
"use client"
import { useParams } from "next/navigation"
import { TargetSettings } from "@/components/target/target-settings"
/**
* Target settings page
* Contains blacklist configuration and other settings
*/
export default function TargetSettingsPage() {
const { id } = useParams<{ id: string }>()
const targetId = Number(id)
return (
<div className="px-4 lg:px-6">
<TargetSettings targetId={targetId} />
</div>
)
}


@@ -5,14 +5,14 @@ import { useParams } from "next/navigation"
import { VulnerabilitiesDetailView } from "@/components/vulnerabilities"
/**
*
*
* Target vulnerabilities page
* Displays vulnerability details under the target
*/
export default function TargetVulnerabilitiesPage() {
const { id } = useParams<{ id: string }>()
return (
<div className="relative flex flex-col gap-4 overflow-auto px-4 lg:px-6">
<div className="px-4 lg:px-6">
<VulnerabilitiesDetailView targetId={parseInt(id)} />
</div>
)


@@ -1,23 +1,28 @@
"use client"
import { AllTargetsDetailView } from "@/components/target/all-targets-detail-view"
import { Target } from "lucide-react"
import { useTranslations } from "next-intl"
export default function AllTargetsPage() {
const t = useTranslations("pages.target")
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 页面头部 */}
{/* Page header */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div>
<h2 className="text-2xl font-bold tracking-tight flex items-center gap-2">
<Target />
{t("title")}
</h2>
<p className="text-muted-foreground">
{t("description")}
</p>
</div>
</div>
{/* 内容区域 */}
{/* Content area */}
<div className="px-4 lg:px-6">
<AllTargetsDetailView />
</div>


@@ -0,0 +1,8 @@
/**
* Custom tools page
* Display and manage custom scanning scripts and tools
*/
export default function CustomToolsPage() {
// Tool configuration feature has been deprecated, this page is kept as placeholder to avoid broken historical links
return null
}


@@ -0,0 +1,8 @@
/**
* Open source tools page
* Display and manage open source scanning tools
*/
export default function OpensourceToolsPage() {
// Tool configuration feature has been deprecated, this page is kept as placeholder to avoid broken historical links
return null
}


@@ -0,0 +1,10 @@
"use client"
/**
* Tool configuration page
* Display and manage scanning tool sets (open source tools and custom tools)
*/
export default function ToolConfigPage() {
// Tool configuration feature has been deprecated, this page is kept as placeholder to avoid broken historical links
return null
}


@@ -0,0 +1,12 @@
"use client"
import React from "react"
import { ARLFingerprintView } from "@/components/fingerprints"
export default function ARLFingerprintPage() {
return (
<div className="px-4 lg:px-6">
<ARLFingerprintView />
</div>
)
}


@@ -0,0 +1,12 @@
"use client"
import React from "react"
import { EholeFingerprintView } from "@/components/fingerprints"
export default function EholeFingerprintPage() {
return (
<div className="px-4 lg:px-6">
<EholeFingerprintView />
</div>
)
}


@@ -0,0 +1,12 @@
"use client"
import React from "react"
import { FingerPrintHubFingerprintView } from "@/components/fingerprints"
export default function FingerPrintHubFingerprintPage() {
return (
<div className="px-4 lg:px-6">
<FingerPrintHubFingerprintView />
</div>
)
}


@@ -0,0 +1,12 @@
"use client"
import React from "react"
import { FingersFingerprintView } from "@/components/fingerprints"
export default function FingersFingerprintPage() {
return (
<div className="px-4 lg:px-6">
<FingersFingerprintView />
</div>
)
}


@@ -0,0 +1,11 @@
"use client"
import { GobyFingerprintView } from "@/components/fingerprints"
export default function GobyFingerprintsPage() {
return (
<div className="px-4 lg:px-6">
<GobyFingerprintView />
</div>
)
}


@@ -0,0 +1,178 @@
"use client"
import React from "react"
import { usePathname } from "next/navigation"
import Link from "next/link"
import { Fingerprint, HelpCircle } from "lucide-react"
import { Tabs, TabsList, TabsTrigger } from "@/components/ui/tabs"
import { Badge } from "@/components/ui/badge"
import { Skeleton } from "@/components/ui/skeleton"
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip"
import { useFingerprintStats } from "@/hooks/use-fingerprints"
import { useTranslations } from "next-intl"
/**
* Fingerprint management layout
* Provides tab navigation to switch between different fingerprint libraries
*/
export default function FingerprintsLayout({
children,
}: {
children: React.ReactNode
}) {
const pathname = usePathname()
const { data: stats, isLoading } = useFingerprintStats()
const t = useTranslations("tools.fingerprints")
// Get currently active tab
const getActiveTab = () => {
if (pathname.includes("/ehole")) return "ehole"
if (pathname.includes("/goby")) return "goby"
if (pathname.includes("/wappalyzer")) return "wappalyzer"
if (pathname.includes("/fingers")) return "fingers"
if (pathname.includes("/fingerprinthub")) return "fingerprinthub"
if (pathname.includes("/arl")) return "arl"
return "ehole"
}
// Tab path mapping
const basePath = "/tools/fingerprints"
const tabPaths = {
ehole: `${basePath}/ehole/`,
goby: `${basePath}/goby/`,
wappalyzer: `${basePath}/wappalyzer/`,
fingers: `${basePath}/fingers/`,
fingerprinthub: `${basePath}/fingerprinthub/`,
arl: `${basePath}/arl/`,
}
// Fingerprint library counts
const counts = {
ehole: stats?.ehole || 0,
goby: stats?.goby || 0,
wappalyzer: stats?.wappalyzer || 0,
fingers: stats?.fingers || 0,
fingerprinthub: stats?.fingerprinthub || 0,
arl: stats?.arl || 0,
}
if (isLoading) {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
<div className="flex items-center justify-between px-4 lg:px-6">
<div className="space-y-2">
<Skeleton className="h-7 w-32" />
<Skeleton className="h-4 w-48" />
</div>
</div>
<div className="px-4 lg:px-6">
<Skeleton className="h-10 w-96" />
</div>
</div>
)
}
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* Page header */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div>
<h2 className="text-2xl font-bold tracking-tight flex items-center gap-2">
<Fingerprint className="h-6 w-6" />
{t("title")}
</h2>
<p className="text-muted-foreground">{t("pageDescription")}</p>
</div>
</div>
{/* Tabs navigation */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div className="flex items-center gap-3">
<Tabs value={getActiveTab()} className="w-full">
<TabsList>
<TabsTrigger value="ehole" asChild>
<Link href={tabPaths.ehole} className="flex items-center gap-0.5">
EHole
{counts.ehole > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.ehole}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="goby" asChild>
<Link href={tabPaths.goby} className="flex items-center gap-0.5">
Goby
{counts.goby > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.goby}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="wappalyzer" asChild>
<Link href={tabPaths.wappalyzer} className="flex items-center gap-0.5">
Wappalyzer
{counts.wappalyzer > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.wappalyzer}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="fingers" asChild>
<Link href={tabPaths.fingers} className="flex items-center gap-0.5">
Fingers
{counts.fingers > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.fingers}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="fingerprinthub" asChild>
<Link href={tabPaths.fingerprinthub} className="flex items-center gap-0.5">
FingerPrintHub
{counts.fingerprinthub > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.fingerprinthub}
</Badge>
)}
</Link>
</TabsTrigger>
<TabsTrigger value="arl" asChild>
<Link href={tabPaths.arl} className="flex items-center gap-0.5">
ARL
{counts.arl > 0 && (
<Badge variant="secondary" className="ml-1.5 h-5 min-w-5 rounded-full px-1.5 text-xs">
{counts.arl}
</Badge>
)}
</Link>
</TabsTrigger>
</TabsList>
</Tabs>
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<HelpCircle className="h-4 w-4 text-muted-foreground cursor-help" />
</TooltipTrigger>
<TooltipContent side="right" className="max-w-sm whitespace-pre-line">
{t("helpText")}
</TooltipContent>
</Tooltip>
</TooltipProvider>
</div>
</div>
{/* Sub-page content */}
{children}
</div>
)
}
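The layout's `getActiveTab` derives the active tab from the current pathname by first-match substring checks, falling back to `ehole`. A minimal standalone sketch of that resolution logic (the helper name and array shape are illustrative, not part of the component):

```typescript
// Pathname → tab resolution mirroring the layout's getActiveTab above.
// Tab ids and the "ehole" default match the component; the standalone
// function shape is an assumption for illustration.
const TAB_IDS = ["ehole", "goby", "wappalyzer", "fingers", "fingerprinthub", "arl"] as const
type TabId = (typeof TAB_IDS)[number]

function resolveActiveTab(pathname: string): TabId {
  // First match wins; unknown paths fall back to the default tab.
  return TAB_IDS.find((id) => pathname.includes(`/${id}`)) ?? "ehole"
}
```

Note that checking `fingers` before `fingerprinthub` is safe here: `"fingers"` is not a substring of `"fingerprinthub"`, so the segment names cannot shadow each other.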

View File

@@ -0,0 +1,10 @@
"use client"
import { redirect } from "next/navigation"
/**
* Fingerprint management homepage - Redirect to EHole
*/
export default function FingerprintsPage() {
redirect("/tools/fingerprints/ehole/")
}

View File

@@ -0,0 +1,11 @@
"use client"
import { WappalyzerFingerprintView } from "@/components/fingerprints"
export default function WappalyzerFingerprintsPage() {
return (
<div className="px-4 lg:px-6">
<WappalyzerFingerprintView />
</div>
)
}

View File

@@ -1,9 +1,19 @@
"use client"
import { useEffect, useMemo, useState } from "react"
import Editor from "@monaco-editor/react"
import dynamic from "next/dynamic"
import Link from "next/link"
import { useParams } from "next/navigation"
// Dynamically import Monaco Editor to reduce the initial bundle size (~2 MB)
const Editor = dynamic(() => import("@monaco-editor/react"), {
ssr: false,
loading: () => (
<div className="flex items-center justify-center h-full">
<div className="text-sm text-muted-foreground">Loading editor...</div>
</div>
),
})
import {
ChevronDown,
ChevronRight,
@@ -30,12 +40,13 @@ import {
} from "@/hooks/use-nuclei-repos"
import type { NucleiTemplateTreeNode } from "@/types/nuclei.types"
import { cn } from "@/lib/utils"
import { useTranslations } from "next-intl"
interface FlattenedNode extends NucleiTemplateTreeNode {
level: number
}
/** 解析 YAML 内容提取模板信息 */
/** Parse YAML content to extract template information */
function parseTemplateInfo(content: string) {
const info: {
id?: string
@@ -45,7 +56,7 @@ function parseTemplateInfo(content: string) {
author?: string
} = {}
// 简单正则提取,不用完整 YAML 解析
// Simple regex extraction, no full YAML parsing
const idMatch = content.match(/^id:\s*(.+)$/m)
if (idMatch) info.id = idMatch[1].trim()
@@ -64,7 +75,7 @@ function parseTemplateInfo(content: string) {
return info
}
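As the comment notes, `parseTemplateInfo` deliberately uses line-anchored regexes instead of a full YAML parser to pull a few fields out of a template. A self-contained sketch of that approach (the exact field set and helper name are assumptions):

```typescript
// Regex-based extraction of a few fields from a Nuclei template,
// mirroring the "no full YAML parsing" approach above. Field set and
// function name are illustrative.
function extractTemplateFields(content: string): { id?: string; name?: string; severity?: string } {
  // The m flag anchors ^/$ per line, so each pattern matches one YAML line.
  const pick = (re: RegExp) => content.match(re)?.[1]?.trim()
  return {
    id: pick(/^id:\s*(.+)$/m),
    name: pick(/^\s*name:\s*(.+)$/m),
    severity: pick(/^\s*severity:\s*(.+)$/m),
  }
}
```

This trades robustness (no multi-line values, no quoting rules) for zero dependencies, which is acceptable when only displaying header metadata.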
/** 严重程度对应的颜色 */
/** Map a severity level to its display color */
function getSeverityColor(severity?: string) {
switch (severity) {
case "critical":
@@ -92,6 +103,8 @@ export default function NucleiRepoDetailPage() {
const [editorValue, setEditorValue] = useState<string>("")
const { currentTheme } = useColorTheme()
const t = useTranslations("tools.nuclei")
const tCommon = useTranslations("common")
const numericRepoId = repoId ? Number(repoId) : null
@@ -100,7 +113,7 @@ export default function NucleiRepoDetailPage() {
const { data: repoDetail } = useNucleiRepo(numericRepoId)
const refreshMutation = useRefreshNucleiRepo()
// 展开的节点和过滤后的节点
// Expanded nodes and filtered nodes
const nodes: FlattenedNode[] = useMemo(() => {
const result: FlattenedNode[] = []
const expandedSet = new Set(expandedPaths)
@@ -118,7 +131,7 @@ export default function NucleiRepoDetailPage() {
continue
}
// 搜索过滤
// Search filter
if (query && isFile && !item.name.toLowerCase().includes(query)) {
continue
}
@@ -126,7 +139,7 @@ export default function NucleiRepoDetailPage() {
result.push({ ...item, level })
if (isFolder && item.children && item.children.length > 0) {
// 搜索时展开所有文件夹,否则按 expandedPaths
// Expand all folders when searching, otherwise follow expandedPaths
if (query || expandedSet.has(item.path)) {
visit(item.children, level + 1)
}
@@ -157,7 +170,7 @@ export default function NucleiRepoDetailPage() {
} else {
setEditorValue("")
}
}, [templateContent?.path])
}, [templateContent])
const toggleFolder = (path: string) => {
setExpandedPaths((prev) =>
@@ -165,9 +178,9 @@ export default function NucleiRepoDetailPage() {
)
}
const repoDisplayName = repoDetail?.name || `仓库 #${repoId}`
const repoDisplayName = repoDetail?.name || t("repoName", { id: repoId })
// 解析当前模板信息
// Parse current template information
const templateInfo = useMemo(() => {
if (!templateContent?.content) return null
return parseTemplateInfo(templateContent.content)
@@ -175,12 +188,12 @@ export default function NucleiRepoDetailPage() {
return (
<div className="flex flex-col h-full">
{/* 顶部:返回 + 标题 + 搜索 + 同步 */}
{/* Top: Back + Title + Search + Sync */}
<div className="flex items-center gap-4 px-4 py-4 lg:px-6">
<Link href="/tools/nuclei/">
<Button variant="ghost" size="sm" className="gap-1.5">
<ArrowLeft className="h-4 w-4" />
{t("back")}
</Button>
</Link>
<h1 className="text-xl font-bold truncate">{repoDisplayName}</h1>
@@ -188,7 +201,7 @@ export default function NucleiRepoDetailPage() {
<div className="relative flex-1">
<Search className="absolute left-2.5 top-1/2 h-4 w-4 -translate-y-1/2 text-muted-foreground" />
<Input
placeholder="搜索模板..."
placeholder={t("searchPlaceholder")}
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="pl-8"
@@ -202,28 +215,28 @@ export default function NucleiRepoDetailPage() {
disabled={refreshMutation.isPending || !numericRepoId}
>
<RefreshCw className={cn("h-4 w-4 mr-1.5", refreshMutation.isPending && "animate-spin")} />
{refreshMutation.isPending ? "同步中..." : "同步"}
{refreshMutation.isPending ? t("syncing") : t("sync")}
</Button>
</div>
<Separator />
{/* 主体:左侧目录 + 右侧内容 */}
{/* Main: Left directory + Right content */}
<div className="flex flex-1 min-h-0">
{/* 左侧:模板目录 */}
{/* Left: Template directory */}
<div className="w-72 lg:w-80 border-r flex flex-col">
<div className="px-4 py-3 border-b">
<h2 className="text-sm font-medium text-muted-foreground">
{nodes.filter((n) => n.type === "file").length > 0 &&
`(${nodes.filter((n) => n.type === "file").length} 个模板)`}
{t("templateDirectory")} {nodes.filter((n) => n.type === "file").length > 0 &&
`(${t("templateCount", { count: nodes.filter((n) => n.type === "file").length })})`}
</h2>
</div>
<ScrollArea className="flex-1">
{isLoading ? (
<div className="p-4 text-sm text-muted-foreground">...</div>
<div className="p-4 text-sm text-muted-foreground">{tCommon("status.loading")}</div>
) : isError || nodes.length === 0 ? (
<div className="p-4 text-sm text-muted-foreground">
{searchQuery ? "未找到匹配的模板" : "暂无模板或加载失败"}
{searchQuery ? t("noMatchingTemplate") : t("noTemplateOrLoadFailed")}
</div>
) : (
<div className="p-2">
@@ -245,7 +258,7 @@ export default function NucleiRepoDetailPage() {
}
}}
className={cn(
"flex w-full items-center gap-1.5 rounded-md px-2 py-1.5 text-left text-sm transition-colors",
"tree-node-item flex w-full items-center gap-1.5 rounded-md px-2 py-1.5 text-left text-sm transition-colors",
isFolder && "font-medium",
isActive
? "bg-primary/10 text-primary"
@@ -277,11 +290,11 @@ export default function NucleiRepoDetailPage() {
</ScrollArea>
</div>
{/* 右侧:模板内容 */}
{/* Right: Template content */}
<div className="flex-1 flex flex-col min-w-0">
{selectedPath && templateContent ? (
<>
{/* 模板头部 */}
{/* Template header */}
<div className="px-6 py-4 border-b">
<div className="flex items-start gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-primary/10 shrink-0">
@@ -306,7 +319,7 @@ export default function NucleiRepoDetailPage() {
</div>
</div>
{/* 代码编辑器 */}
{/* Code editor */}
<div className="flex-1 min-h-0">
<Editor
height="100%"
@@ -325,7 +338,7 @@ export default function NucleiRepoDetailPage() {
/>
</div>
{/* 模板信息 */}
{/* Template information */}
{templateInfo && (templateInfo.tags || templateInfo.author) && (
<div className="px-6 py-3 border-t flex items-center gap-4 text-sm">
{templateInfo.tags && templateInfo.tags.length > 0 && (
@@ -355,12 +368,12 @@ export default function NucleiRepoDetailPage() {
)}
</>
) : (
// 未选中状态
// Unselected state
<div className="flex-1 flex items-center justify-center">
<div className="text-center text-muted-foreground">
<FileText className="h-12 w-12 mx-auto mb-3 opacity-50" />
<p className="text-sm"></p>
<p className="text-xs mt-1">使</p>
<p className="text-sm">{t("selectTemplate")}</p>
<p className="text-xs mt-1">{t("useSearch")}</p>
</div>
</div>
)}

View File

@@ -2,7 +2,8 @@
import Link from "next/link"
import { useState, useMemo, type FormEvent } from "react"
import { GitBranch, Search, RefreshCw, Settings, Trash2, FolderOpen } from "lucide-react"
import { GitBranch, Search, RefreshCw, Settings, Trash2, FolderOpen, Plus } from "lucide-react"
import { useTranslations, useLocale } from "next-intl"
import { Button } from "@/components/ui/button"
import { Badge } from "@/components/ui/badge"
import { Input } from "@/components/ui/input"
@@ -30,12 +31,13 @@ import {
} from "@/hooks/use-nuclei-repos"
import { cn } from "@/lib/utils"
import { MasterDetailSkeleton } from "@/components/ui/master-detail-skeleton"
import { getDateLocale } from "@/lib/date-utils"
/** 格式化时间显示 */
function formatDateTime(isoString: string | null) {
/** Format a timestamp for display */
function formatDateTime(isoString: string | null, locale: string) {
if (!isoString) return "-"
try {
return new Date(isoString).toLocaleString("zh-CN")
return new Date(isoString).toLocaleString(getDateLocale(locale))
} catch {
return isoString
}
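`formatDateTime` now resolves the UI locale to a date locale via `getDateLocale` instead of hard-coding `"zh-CN"`, returning `"-"` for missing values. A runnable sketch with a stubbed `getDateLocale` (the real mapping lives in `@/lib/date-utils`; this two-locale stub is an assumption):

```typescript
// Stub for the project's getDateLocale; the real implementation in
// @/lib/date-utils may handle more locales. Assumed mapping for illustration.
function getDateLocale(locale: string): string {
  return locale === "zh" ? "zh-CN" : "en-US"
}

function formatDateTime(isoString: string | null, locale: string): string {
  if (!isoString) return "-"
  try {
    return new Date(isoString).toLocaleString(getDateLocale(locale))
  } catch {
    // Fall back to the raw string if formatting fails (e.g. invalid locale tag).
    return isoString
  }
}
```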
@@ -54,6 +56,12 @@ export default function NucleiReposPage() {
const [deleteDialogOpen, setDeleteDialogOpen] = useState(false)
const [repoToDelete, setRepoToDelete] = useState<NucleiRepo | null>(null)
// Internationalization
const tCommon = useTranslations("common")
const tConfirm = useTranslations("common.confirm")
const t = useTranslations("pages.nuclei")
const locale = useLocale()
// API Hooks
const { data: repos, isLoading, isError } = useNucleiRepos()
const createMutation = useCreateNucleiRepo()
@@ -61,7 +69,7 @@ export default function NucleiReposPage() {
const refreshMutation = useRefreshNucleiRepo()
const updateMutation = useUpdateNucleiRepo()
// 过滤仓库列表
// Filter repository list
const filteredRepos = useMemo(() => {
if (!repos) return []
if (!searchQuery.trim()) return repos
@@ -73,7 +81,7 @@ export default function NucleiReposPage() {
)
}, [repos, searchQuery])
// 选中的仓库
// Selected repository
const selectedRepo = useMemo(() => {
if (!selectedId || !repos) return null
return repos.find((r) => r.id === selectedId) || null
@@ -151,21 +159,21 @@ export default function NucleiReposPage() {
)
}
// 加载状态
// Loading state
if (isLoading) {
return <MasterDetailSkeleton title="Nuclei 模板仓库" listItemCount={3} />
return <MasterDetailSkeleton title={t("title")} listItemCount={3} />
}
return (
<div className="flex flex-col h-full">
{/* 顶部:标题 + 搜索 + 新增按钮 */}
{/* Top: Title + Search + Add button */}
<div className="flex items-center justify-between gap-4 px-4 py-4 lg:px-6">
<h1 className="text-2xl font-bold shrink-0">Nuclei </h1>
<h1 className="text-2xl font-bold shrink-0">{t("title")}</h1>
<div className="flex items-center gap-2 flex-1 max-w-md">
<div className="relative flex-1">
<Search className="absolute left-2.5 top-1/2 h-4 w-4 -translate-y-1/2 text-muted-foreground" />
<Input
placeholder="搜索仓库..."
placeholder={t("searchPlaceholder")}
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="pl-8"
@@ -173,29 +181,30 @@ export default function NucleiReposPage() {
</div>
</div>
<Button onClick={() => setCreateDialogOpen(true)}>
<Plus className="h-4 w-4" />
{t("addRepo")}
</Button>
</div>
<Separator />
{/* 主体:左侧列表 + 右侧详情 */}
{/* Main: Left list + Right details */}
<div className="flex flex-1 min-h-0">
{/* 左侧:仓库列表 */}
{/* Left: Repository list */}
<div className="w-72 lg:w-80 border-r flex flex-col">
<div className="px-4 py-3 border-b">
<h2 className="text-sm font-medium text-muted-foreground">
({filteredRepos.length})
{t("listTitle")} ({filteredRepos.length})
</h2>
</div>
<ScrollArea className="flex-1">
{isLoading ? (
<div className="p-4 text-sm text-muted-foreground">...</div>
<div className="p-4 text-sm text-muted-foreground">{t("loading")}</div>
) : isError ? (
<div className="p-4 text-sm text-red-500"></div>
<div className="p-4 text-sm text-red-500">{t("loadFailed")}</div>
) : filteredRepos.length === 0 ? (
<div className="p-4 text-sm text-muted-foreground">
{searchQuery ? "未找到匹配的仓库" : "暂无仓库,请先新增"}
{searchQuery ? t("noMatch") : t("noData")}
</div>
) : (
<div className="p-2">
@@ -216,18 +225,18 @@ export default function NucleiReposPage() {
</span>
{repo.lastSyncedAt ? (
<Badge variant="outline" className="bg-green-50 text-green-700 border-green-200 text-xs shrink-0">
{t("synced")}
</Badge>
) : (
<Badge variant="outline" className="text-xs shrink-0">
{t("notSynced")}
</Badge>
)}
</div>
<div className="text-xs text-muted-foreground mt-0.5 truncate">
{repo.lastSyncedAt
? `同步于 ${formatDateTime(repo.lastSyncedAt)}`
: "尚未同步"}
? `${t("syncedAt")} ${formatDateTime(repo.lastSyncedAt, locale)}`
: t("notSyncedYet")}
</div>
</button>
))}
@@ -236,11 +245,11 @@ export default function NucleiReposPage() {
</ScrollArea>
</div>
{/* 右侧:仓库详情 */}
{/* Right: Repository details */}
<div className="flex-1 flex flex-col min-w-0">
{selectedRepo ? (
<>
{/* 详情头部 */}
{/* Details header */}
<div className="px-6 py-4 border-b">
<div className="flex items-start gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-primary/10 shrink-0">
@@ -253,33 +262,33 @@ export default function NucleiReposPage() {
</h2>
{selectedRepo.lastSyncedAt ? (
<Badge variant="outline" className="bg-green-50 text-green-700 border-green-200">
{t("synced")}
</Badge>
) : (
<Badge variant="outline"></Badge>
<Badge variant="outline">{t("notSynced")}</Badge>
)}
</div>
</div>
</div>
</div>
{/* 详情内容 */}
{/* Details content */}
<ScrollArea className="flex-1">
<div className="p-6 space-y-6">
{/* 统计信息 */}
{/* Statistics information */}
<div className="rounded-lg border">
<div className="grid grid-cols-2 divide-x">
<div className="p-4">
<div className="text-xs text-muted-foreground"></div>
<div className="text-xs text-muted-foreground">{t("status")}</div>
<div className="text-lg font-semibold mt-1">
{selectedRepo.lastSyncedAt ? "已同步" : "未同步"}
{selectedRepo.lastSyncedAt ? t("synced") : t("notSynced")}
</div>
</div>
<div className="p-4">
<div className="text-xs text-muted-foreground"></div>
<div className="text-xs text-muted-foreground">{t("lastSync")}</div>
<div className="text-lg font-semibold mt-1">
{selectedRepo.lastSyncedAt
? new Date(selectedRepo.lastSyncedAt).toLocaleString("zh-CN")
? formatDateTime(selectedRepo.lastSyncedAt, locale)
: "-"}
</div>
</div>
@@ -287,14 +296,14 @@ export default function NucleiReposPage() {
<Separator />
<div className="p-4 space-y-3">
<div className="text-sm">
<span className="text-muted-foreground">Git </span>
<span className="text-muted-foreground">{t("gitUrl")}</span>
<div className="font-mono text-xs mt-1 break-all bg-muted p-2 rounded">
{selectedRepo.repoUrl}
</div>
</div>
{selectedRepo.localPath && (
<div className="text-sm">
<span className="text-muted-foreground"></span>
<span className="text-muted-foreground">{t("localPath")}</span>
<div className="font-mono text-xs mt-1 break-all bg-muted p-2 rounded">
{selectedRepo.localPath}
</div>
@@ -302,7 +311,7 @@ export default function NucleiReposPage() {
)}
{selectedRepo.commitHash && (
<div className="text-sm">
<span className="text-muted-foreground">Commit</span>
<span className="text-muted-foreground">{t("commit")}</span>
<div className="font-mono text-xs mt-1 break-all bg-muted p-2 rounded">
{selectedRepo.commitHash}
</div>
@@ -313,7 +322,7 @@ export default function NucleiReposPage() {
</div>
</ScrollArea>
{/* 操作按钮 */}
{/* Action buttons */}
<div className="px-6 py-4 border-t flex items-center gap-2">
<Button
variant="outline"
@@ -322,7 +331,7 @@ export default function NucleiReposPage() {
disabled={refreshMutation.isPending}
>
<RefreshCw className={cn("h-4 w-4 mr-1.5", refreshMutation.isPending && "animate-spin")} />
{refreshMutation.isPending ? "同步中..." : "同步仓库"}
{refreshMutation.isPending ? t("syncing") : t("syncRepo")}
</Button>
<Button
variant="outline"
@@ -330,12 +339,12 @@ export default function NucleiReposPage() {
onClick={() => openEditDialog(selectedRepo)}
>
<Settings className="h-4 w-4 mr-1.5" />
{t("editConfig")}
</Button>
<Link href={`/tools/nuclei/${selectedRepo.id}/`}>
<Button size="sm">
<FolderOpen className="h-4 w-4 mr-1.5" />
{t("manageTemplates")}
</Button>
</Link>
<div className="flex-1" />
@@ -347,16 +356,16 @@ export default function NucleiReposPage() {
disabled={deleteMutation.isPending}
>
<Trash2 className="h-4 w-4 mr-1.5" />
{t("delete")}
</Button>
</div>
</>
) : (
// 未选中状态
// Unselected state
<div className="flex-1 flex items-center justify-center">
<div className="text-center text-muted-foreground">
<GitBranch className="h-12 w-12 mx-auto mb-3 opacity-50" />
<p className="text-sm"></p>
<p className="text-sm">{t("selectHint")}</p>
</div>
</div>
)}
@@ -371,32 +380,32 @@ export default function NucleiReposPage() {
}}>
<DialogContent className="sm:max-w-md">
<DialogHeader>
<DialogTitle> Nuclei </DialogTitle>
<DialogTitle>{t("addDialog.title")}</DialogTitle>
</DialogHeader>
<form className="space-y-4" onSubmit={handleCreateSubmit}>
<div className="space-y-2">
<Label htmlFor="nuclei-repo-name"></Label>
<Label htmlFor="nuclei-repo-name">{t("addDialog.repoName")}</Label>
<Input
id="nuclei-repo-name"
type="text"
placeholder="例如:默认 Nuclei 官方模板"
placeholder={t("addDialog.repoNamePlaceholder")}
value={newName}
onChange={(event) => setNewName(event.target.value)}
/>
</div>
<div className="space-y-2">
<Label htmlFor="nuclei-repo-url">Git </Label>
<Label htmlFor="nuclei-repo-url">{t("addDialog.gitUrl")}</Label>
<Input
id="nuclei-repo-url"
type="text"
placeholder="例如https://github.com/projectdiscovery/nuclei-templates.git"
placeholder={t("addDialog.gitUrlPlaceholder")}
value={newRepoUrl}
onChange={(event) => setNewRepoUrl(event.target.value)}
/>
</div>
{/* 目前只支持公开仓库,这里不再提供认证方式和凭据配置 */}
{/* Only public repositories are currently supported, so no authentication method or credential configuration is offered here */}
<DialogFooter>
<Button
@@ -405,13 +414,13 @@ export default function NucleiReposPage() {
onClick={() => setCreateDialogOpen(false)}
disabled={createMutation.isPending}
>
{t("addDialog.cancel")}
</Button>
<Button
type="submit"
disabled={!newName.trim() || !newRepoUrl.trim() || createMutation.isPending}
>
{createMutation.isPending ? "创建中..." : "确认新增"}
{createMutation.isPending ? t("addDialog.creating") : t("addDialog.confirm")}
</Button>
</DialogFooter>
</form>
@@ -429,26 +438,26 @@ export default function NucleiReposPage() {
>
<DialogContent className="sm:max-w-md">
<DialogHeader>
<DialogTitle> Nuclei </DialogTitle>
<DialogTitle>{t("editDialog.title")}</DialogTitle>
</DialogHeader>
<form className="space-y-4" onSubmit={handleEditSubmit}>
<div className="space-y-1 text-sm text-muted-foreground">
<span className="font-medium"></span>
<span className="font-medium">{t("editDialog.repoName")}</span>
<span>{editingRepo?.name ?? ""}</span>
</div>
<div className="space-y-2">
<Label htmlFor="edit-nuclei-repo-url">Git </Label>
<Label htmlFor="edit-nuclei-repo-url">{t("editDialog.gitUrl")}</Label>
<Input
id="edit-nuclei-repo-url"
type="text"
placeholder="例如https://github.com/projectdiscovery/nuclei-templates.git"
placeholder={t("editDialog.gitUrlPlaceholder")}
value={editRepoUrl}
onChange={(event) => setEditRepoUrl(event.target.value)}
/>
</div>
{/* 编辑时也不再支持配置认证方式/凭据,仅允许修改 Git 地址 */}
{/* Editing likewise no longer supports configuring an authentication method or credentials; only the Git URL can be modified */}
<DialogFooter>
<Button
@@ -457,36 +466,36 @@ export default function NucleiReposPage() {
onClick={() => setEditDialogOpen(false)}
disabled={updateMutation.isPending}
>
{t("editDialog.cancel")}
</Button>
<Button
type="submit"
disabled={!editRepoUrl.trim() || updateMutation.isPending}
>
{updateMutation.isPending ? "保存中..." : "保存配置"}
{updateMutation.isPending ? t("editDialog.saving") : t("editDialog.save")}
</Button>
</DialogFooter>
</form>
</DialogContent>
</Dialog>
{/* 删除确认弹窗 */}
{/* Delete confirmation dialog */}
<AlertDialog open={deleteDialogOpen} onOpenChange={setDeleteDialogOpen}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle></AlertDialogTitle>
<AlertDialogTitle>{tConfirm("deleteTitle")}</AlertDialogTitle>
<AlertDialogDescription>
{repoToDelete?.name}
{tConfirm("deleteNucleiRepoMessage", { name: repoToDelete?.name ?? "" })}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel></AlertDialogCancel>
<AlertDialogCancel>{tCommon("actions.cancel")}</AlertDialogCancel>
<AlertDialogAction
onClick={confirmDelete}
className="bg-destructive text-destructive-foreground hover:bg-destructive/90"
disabled={deleteMutation.isPending}
>
{deleteMutation.isPending ? "删除中..." : "删除"}
{deleteMutation.isPending ? tConfirm("deleting") : tCommon("actions.delete")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>

View File

@@ -1,18 +1,24 @@
"use client"
import { Button } from "@/components/ui/button"
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card"
import { PackageOpen, Settings, ArrowRight } from "lucide-react"
import Link from "next/link"
import { useTranslations } from "next-intl"
/**
*
*
* Tools overview page
* Displays entry points for open source tools and custom tools
*/
export default function ToolsPage() {
// 功能模块
const t = useTranslations("pages.tools")
const tCommon = useTranslations("common")
// Feature modules
const modules = [
{
title: "字典管理",
description: "管理目录扫描等使用的字典文件",
title: t("wordlists.title"),
description: t("wordlists.description"),
href: "/tools/wordlists/",
icon: PackageOpen,
status: "available",
@@ -22,8 +28,8 @@ export default function ToolsPage() {
},
},
{
title: "Nuclei 模板",
description: "浏览本地 Nuclei 模板结构及内容",
title: t("nuclei.title"),
description: t("nuclei.description"),
href: "/tools/nuclei/",
icon: Settings,
status: "available",
@@ -36,17 +42,17 @@ export default function ToolsPage() {
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 页面头部 */}
{/* Page header */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div>
<h2 className="text-2xl font-bold tracking-tight"></h2>
<h2 className="text-2xl font-bold tracking-tight">{t("title")}</h2>
<p className="text-muted-foreground">
{t("description")}
</p>
</div>
</div>
{/* 统计卡片 */}
{/* Statistics cards */}
<div className="px-4 lg:px-6">
<div className="grid gap-4 md:grid-cols-2">
{modules.map((module) => (
@@ -59,7 +65,7 @@ export default function ToolsPage() {
</div>
{module.status === "coming-soon" && (
<span className="text-xs bg-yellow-100 text-yellow-800 px-2 py-1 rounded-full">
线
{t("comingSoon")}
</span>
)}
</div>
@@ -67,29 +73,29 @@ export default function ToolsPage() {
</CardHeader>
<CardContent>
<div className="space-y-4">
{/* 统计信息 */}
{/* Statistics information */}
<div className="flex items-center gap-6 text-sm">
<div>
<span className="text-muted-foreground"></span>
<span className="text-muted-foreground">{t("stats.total")}</span>
<span className="font-semibold ml-1">{module.stats.total}</span>
</div>
<div>
<span className="text-muted-foreground"></span>
<span className="text-muted-foreground">{t("stats.active")}</span>
<span className="font-semibold ml-1 text-green-600">{module.stats.active}</span>
</div>
</div>
{/* 操作按钮 */}
{/* Action buttons */}
{module.status === "available" ? (
<Link href={module.href}>
<Button className="w-full">
{t("enterManagement")}
<ArrowRight className="h-4 w-4" />
</Button>
</Link>
) : (
<Button disabled className="w-full">
{t("comingSoon")}
</Button>
)}
</div>
@@ -99,13 +105,13 @@ export default function ToolsPage() {
</div>
</div>
{/* 快速操作 */}
{/* Quick actions */}
<div className="px-4 lg:px-6">
<Card>
<CardHeader>
<CardTitle></CardTitle>
<CardTitle>{t("quickActions.title")}</CardTitle>
<CardDescription>
{t("quickActions.description")}
</CardDescription>
</CardHeader>
<CardContent>
@@ -113,7 +119,7 @@ export default function ToolsPage() {
<Link href="/tools/wordlists/">
<Button variant="outline" size="sm">
<PackageOpen className="h-4 w-4" />
{t("wordlists.title")}
</Button>
</Link>
</div>

View File

@@ -1,7 +1,8 @@
"use client"
import { useState, useMemo } from "react"
import { FileText, Search, Copy, Download, Trash2, Pencil } from "lucide-react"
import { FileText, Search, Trash2, Pencil } from "lucide-react"
import { useTranslations, useLocale } from "next-intl"
import { Button } from "@/components/ui/button"
import { Input } from "@/components/ui/input"
import { ScrollArea } from "@/components/ui/scroll-area"
@@ -23,6 +24,7 @@ import { toast } from "sonner"
import { cn } from "@/lib/utils"
import type { Wordlist } from "@/types/wordlist.types"
import { MasterDetailSkeleton } from "@/components/ui/master-detail-skeleton"
import { getDateLocale } from "@/lib/date-utils"
export default function WordlistsPage() {
const [selectedId, setSelectedId] = useState<number | null>(null)
@@ -32,10 +34,17 @@ export default function WordlistsPage() {
const [deleteDialogOpen, setDeleteDialogOpen] = useState(false)
const [wordlistToDelete, setWordlistToDelete] = useState<Wordlist | null>(null)
const { data, isLoading } = useWordlists({ page: 1, pageSize: 100 })
// Internationalization
const tCommon = useTranslations("common")
const tConfirm = useTranslations("common.confirm")
const tNav = useTranslations("navigation")
const t = useTranslations("pages.wordlists")
const locale = useLocale()
const { data, isLoading } = useWordlists({ page: 1, pageSize: 1000 })
const deleteMutation = useDeleteWordlist()
// 过滤字典列表
// Filter the list of wordlists
const filteredWordlists = useMemo(() => {
if (!data?.results) return []
if (!searchQuery.trim()) return data.results
@@ -47,7 +56,7 @@ export default function WordlistsPage() {
)
}, [data?.results, searchQuery])
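The `filteredWordlists` memo here and the `filteredRepos` memo on the Nuclei page apply the same pattern: trim the query, lowercase it, and keep items whose fields contain it. Extracted as a reusable helper (the name and generic shape are assumptions, not project code):

```typescript
// Case-insensitive substring filter over selected string fields,
// mirroring the filteredWordlists / filteredRepos memos. Helper name
// and generic signature are illustrative.
function filterByQuery<T>(
  items: T[],
  query: string,
  fields: (item: T) => (string | undefined)[]
): T[] {
  const q = query.trim().toLowerCase()
  if (!q) return items
  return items.filter((item) => fields(item).some((v) => v?.toLowerCase().includes(q)))
}
```

The wordlists page would then reduce to something like `filterByQuery(data?.results ?? [], searchQuery, (w) => [w.name])` inside the memo.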
// 选中的字典
// Selected wordlist
const selectedWordlist = useMemo(() => {
if (!selectedId || !data?.results) return null
return data.results.find((w) => w.id === selectedId) || null
@@ -60,7 +69,7 @@ export default function WordlistsPage() {
const handleCopyId = (id: number) => {
navigator.clipboard.writeText(String(id))
toast.success("ID 已复制到剪贴板")
toast.success(t("idCopied"))
}
const handleDelete = (wordlist: Wordlist) => {
@@ -88,21 +97,21 @@ export default function WordlistsPage() {
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`
}
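Only the megabyte branch of `formatFileSize` is visible in the hunk above. A complete human-readable size formatter in the same spirit could look like this (the byte and KB branches and their thresholds are assumed, since they fall outside the diff context):

```typescript
// Human-readable file size in the spirit of the component's
// formatFileSize; only the MB branch appears in the diff above, the
// byte/KB branches are assumptions.
function formatFileSize(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`
}
```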
// 加载状态
// Loading state
if (isLoading) {
return <MasterDetailSkeleton title="字典管理" listItemCount={5} />
return <MasterDetailSkeleton title={tNav("wordlists")} listItemCount={5} />
}
return (
<div className="flex flex-col h-full">
{/* 顶部:标题 + 搜索 + 上传按钮 */}
{/* Top: Title + Search + Upload button */}
<div className="flex items-center justify-between gap-4 px-4 py-4 lg:px-6">
<h1 className="text-2xl font-bold shrink-0"></h1>
<h1 className="text-2xl font-bold shrink-0">{t("title")}</h1>
<div className="flex items-center gap-2 flex-1 max-w-md">
<div className="relative flex-1">
<Search className="absolute left-2.5 top-1/2 h-4 w-4 -translate-y-1/2 text-muted-foreground" />
<Input
placeholder="搜索字典..."
placeholder={t("searchPlaceholder")}
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
className="pl-8"
@@ -114,21 +123,21 @@ export default function WordlistsPage() {
<Separator />
{/* 主体:左侧列表 + 右侧详情 */}
{/* Main: Left list + Right details */}
<div className="flex flex-1 min-h-0">
{/* 左侧:字典列表 */}
{/* Left: Wordlist list */}
<div className="w-72 lg:w-80 border-r flex flex-col">
<div className="px-4 py-3 border-b">
<h2 className="text-sm font-medium text-muted-foreground">
({filteredWordlists.length})
{t("listTitle")} ({filteredWordlists.length})
</h2>
</div>
<ScrollArea className="flex-1">
{isLoading ? (
<div className="p-4 text-sm text-muted-foreground">...</div>
<div className="p-4 text-sm text-muted-foreground">{t("loading")}</div>
) : filteredWordlists.length === 0 ? (
<div className="p-4 text-sm text-muted-foreground">
{searchQuery ? "未找到匹配的字典" : "暂无字典,请先上传"}
{searchQuery ? t("noMatch") : t("noData")}
</div>
) : (
<div className="p-2">
@@ -147,7 +156,7 @@ export default function WordlistsPage() {
{wordlist.name}
</div>
<div className="text-xs text-muted-foreground mt-0.5">
{wordlist.lineCount?.toLocaleString() ?? "-"} · {formatFileSize(wordlist.fileSize)}
{wordlist.lineCount?.toLocaleString() ?? "-"} {t("lines")} · {formatFileSize(wordlist.fileSize)}
</div>
</button>
))}
@@ -156,11 +165,11 @@ export default function WordlistsPage() {
</ScrollArea>
</div>
{/* 右侧:字典详情 */}
{/* Right: Wordlist details */}
<div className="flex-1 flex flex-col min-w-0">
{selectedWordlist ? (
<>
{/* 详情头部 */}
{/* Details header */}
<div className="px-6 py-4 border-b">
<div className="flex items-start gap-3">
<div className="flex h-10 w-10 items-center justify-center rounded-lg bg-primary/10 shrink-0">
@@ -179,20 +188,20 @@ export default function WordlistsPage() {
</div>
</div>
{/* 详情内容 */}
{/* Details content */}
<ScrollArea className="flex-1">
<div className="p-6 space-y-6">
{/* 基本信息 */}
{/* Basic information */}
<div className="rounded-lg border">
<div className="grid grid-cols-2 divide-x">
<div className="p-4">
<div className="text-xs text-muted-foreground"></div>
<div className="text-xs text-muted-foreground">{t("rows")}</div>
<div className="text-lg font-semibold mt-1">
{selectedWordlist.lineCount?.toLocaleString() ?? "-"}
</div>
</div>
<div className="p-4">
<div className="text-xs text-muted-foreground"></div>
<div className="text-xs text-muted-foreground">{t("size")}</div>
<div className="text-lg font-semibold mt-1">
{formatFileSize(selectedWordlist.fileSize)}
</div>
@@ -201,18 +210,18 @@ export default function WordlistsPage() {
<Separator />
<div className="p-4 space-y-3">
<div className="flex justify-between text-sm">
<span className="text-muted-foreground">ID</span>
<span className="text-muted-foreground">{t("id")}</span>
<span className="font-mono">{selectedWordlist.id}</span>
</div>
<div className="flex justify-between text-sm">
<span className="text-muted-foreground"></span>
<span className="text-muted-foreground">{t("updatedAt")}</span>
<span>
{new Date(selectedWordlist.updatedAt).toLocaleString("zh-CN")}
{new Date(selectedWordlist.updatedAt).toLocaleString(getDateLocale(locale))}
</span>
</div>
{selectedWordlist.fileHash && (
<div className="text-sm">
<span className="text-muted-foreground">Hash</span>
<span className="text-muted-foreground">{t("hash")}</span>
<div className="font-mono text-xs mt-1 break-all bg-muted p-2 rounded">
{selectedWordlist.fileHash}
</div>
@@ -223,7 +232,7 @@ export default function WordlistsPage() {
</div>
</ScrollArea>
{/* 操作按钮 */}
{/* Action buttons */}
<div className="px-6 py-4 border-t flex items-center gap-2">
<Button
variant="outline"
@@ -231,7 +240,7 @@ export default function WordlistsPage() {
onClick={() => handleEdit(selectedWordlist)}
>
<Pencil className="h-4 w-4 mr-1.5" />
{t("editContent")}
</Button>
<div className="flex-1" />
<Button
@@ -242,46 +251,46 @@ export default function WordlistsPage() {
disabled={deleteMutation.isPending}
>
<Trash2 className="h-4 w-4 mr-1.5" />
{t("delete")}
</Button>
</div>
</>
) : (
// 未选中状态
// Unselected state
<div className="flex-1 flex items-center justify-center">
<div className="text-center text-muted-foreground">
<FileText className="h-12 w-12 mx-auto mb-3 opacity-50" />
<p className="text-sm"></p>
<p className="text-sm">{t("selectHint")}</p>
</div>
</div>
)}
</div>
</div>
{/* 编辑弹窗 */}
{/* Edit dialog */}
<WordlistEditDialog
wordlist={editingWordlist}
open={isEditDialogOpen}
onOpenChange={setIsEditDialogOpen}
/>
{/* 删除确认弹窗 */}
{/* Delete confirmation dialog */}
<AlertDialog open={deleteDialogOpen} onOpenChange={setDeleteDialogOpen}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle></AlertDialogTitle>
<AlertDialogTitle>{tConfirm("deleteTitle")}</AlertDialogTitle>
<AlertDialogDescription>
{wordlistToDelete?.name}
{tConfirm("deleteWordlistMessage", { name: wordlistToDelete?.name ?? "" })}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel></AlertDialogCancel>
<AlertDialogCancel>{tCommon("actions.cancel")}</AlertDialogCancel>
<AlertDialogAction
onClick={confirmDelete}
className="bg-destructive text-destructive-foreground hover:bg-destructive/90"
disabled={deleteMutation.isPending}
>
{deleteMutation.isPending ? "Deleting..." : "Delete"}
{deleteMutation.isPending ? tConfirm("deleting") : tCommon("actions.delete")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
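The hunk above swaps hardcoded Chinese strings for next-intl `t()` lookups. Below is a minimal sketch of the message catalog entries and the `getDateLocale` helper those calls appear to assume — the `wordlists` namespace name, every English string, and the locale mapping are illustrative assumptions, not the project's actual locale files:

```typescript
// Illustrative en-locale catalog for the keys referenced above.
// The "wordlists" namespace and all strings are assumptions;
// the tCommon/tConfirm namespaces are omitted for brevity.
const messages = {
  wordlists: {
    title: "Wordlists",
    searchPlaceholder: "Search wordlists...",
    listTitle: "All wordlists",
    loading: "Loading...",
    noMatch: "No matching wordlists found",
    noData: "No wordlists yet; upload one first",
    lines: "lines",
    rows: "Lines",
    size: "Size",
    id: "ID",
    updatedAt: "Updated",
    hash: "Hash",
    editContent: "Edit content",
    delete: "Delete",
    selectHint: "Select a wordlist to view its details",
  },
} as const;

// Assumed shape of the getDateLocale helper: map the app locale to a
// BCP 47 tag for Date#toLocaleString, replacing the previously
// hardcoded "zh-CN" so dates follow the active UI language.
function getDateLocale(locale: string): string {
  const map: Record<string, string> = { zh: "zh-CN", en: "en-US" };
  return map[locale] ?? "en-US";
}
```

With a catalog like this, `t("noMatch")` resolves through the active locale's `wordlists` namespace, and `toLocaleString(getDateLocale(locale))` formats dates per-locale instead of always as `zh-CN`.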


@@ -1,24 +1,27 @@
"use client"
import React from "react"
import { useTranslations } from "next-intl"
import { VulnerabilitiesDetailView } from "@/components/vulnerabilities"
/**
*
*
* All vulnerabilities page
* Displays all vulnerabilities in the system
*/
export default function VulnerabilitiesPage() {
const t = useTranslations("vulnerabilities")
return (
<div className="flex flex-col gap-4 py-4 md:gap-6 md:py-6">
{/* 页面头部 */}
{/* Page header */}
<div className="flex items-center justify-between px-4 lg:px-6">
<div>
<h2 className="text-2xl font-bold tracking-tight"></h2>
<p className="text-muted-foreground"></p>
<h2 className="text-2xl font-bold tracking-tight">{t("title")}</h2>
<p className="text-muted-foreground">{t("description")}</p>
</div>
</div>
{/* 漏洞列表 */}
{/* Vulnerability list */}
<div className="px-4 lg:px-6">
<VulnerabilitiesDetailView />
</div>

Binary file not shown.

Before

Width:  |  Height:  |  Size: 25 KiB


@@ -1,6 +1,7 @@
@import "tailwindcss";
@import "tw-animate-css";
@import "@xterm/xterm/css/xterm.css";
@import "../styles/themes/index.css";
@custom-variant dark (&:is(.dark *));
@@ -43,7 +44,6 @@
--font-sans: 'Noto Sans SC', system-ui, -apple-system, PingFang SC, sans-serif;
--font-mono: 'JetBrains Mono', 'Fira Code', Consolas, monospace;
--font-serif: Georgia, 'Noto Serif SC', serif;
--radius: 0.625rem;
--tracking-tighter: calc(var(--tracking-normal) - 0.05em);
--tracking-tight: calc(var(--tracking-normal) - 0.025em);
--tracking-wide: calc(var(--tracking-normal) + 0.025em);
@@ -245,6 +245,51 @@
/* Chrome, Safari and Opera */
}
/* Performance: long-list rendering optimization via content-visibility */
.tree-node-item {
content-visibility: auto;
contain-intrinsic-size: 0 36px;
}
}
/* Login page background - uses theme colors to adapt to light/dark mode */
.login-bg {
position: relative;
background-color: var(--background);
}
.login-bg::before {
content: '';
position: absolute;
inset: 0;
background-color: var(--primary);
opacity: 0.04;
mask-image: url("data:image/svg+xml,%3Csvg width='60' height='60' viewBox='0 0 60 60' xmlns='http://www.w3.org/2000/svg'%3E%3Cg fill='none' fill-rule='evenodd'%3E%3Cg fill='%23000' fill-opacity='1'%3E%3Cpath d='M36 34v-4h-2v4h-4v2h4v4h2v-4h4v-2h-4zm0-30V0h-2v4h-4v2h4v4h2V6h4V4h-4zM6 34v-4H4v4H0v2h4v4h2v-4h4v-2H6zM6 4V0H4v4H0v2h4v4h2V6h4V4H6z'/%3E%3C/g%3E%3C/g%3E%3C/svg%3E");
-webkit-mask-image: url("data:image/svg+xml,%3Csvg width='60' height='60' viewBox='0 0 60 60' xmlns='http://www.w3.org/2000/svg'%3E%3Cg fill='none' fill-rule='evenodd'%3E%3Cg fill='%23000' fill-opacity='1'%3E%3Cpath d='M36 34v-4h-2v4h-4v2h4v4h2v-4h4v-2h-4zm0-30V0h-2v4h-4v2h4v4h2V6h4V4h-4zM6 34v-4H4v4H0v2h4v4h2v-4h4v-2H6zM6 4V0H4v4H0v2h4v4h2V6h4V4H6z'/%3E%3C/g%3E%3C/g%3E%3C/svg%3E");
mask-size: 60px 60px;
-webkit-mask-size: 60px 60px;
pointer-events: none;
z-index: 0;
}
.login-bg > * {
position: relative;
z-index: 1;
}
/* Terminal cursor blink animation */
@keyframes blink {
0%, 50% {
opacity: 1;
}
51%, 100% {
opacity: 0;
}
}
.animate-blink {
animation: blink 1s step-end infinite;
}
/* Notification bell shake animation */
@@ -292,4 +337,256 @@
);
background-size: 1rem 1rem;
animation: progress-stripes 1s linear infinite;
}
}
/* Lightning flash animation - quick scan button */
@keyframes flash {
0%, 90%, 100% {
opacity: 1;
transform: scale(1);
filter: drop-shadow(0 0 2px rgba(250, 204, 21, 0.4));
}
93% {
opacity: 1;
transform: scale(1.3);
filter: drop-shadow(0 0 8px rgba(250, 204, 21, 0.8));
}
96% {
opacity: 0.6;
transform: scale(1);
filter: drop-shadow(0 0 2px rgba(250, 204, 21, 0.4));
}
}
/* Whole-button glow animation */
@keyframes glow {
0%, 85%, 100% {
box-shadow: 0 0 0 transparent;
}
90% {
box-shadow: 0 0 12px oklch(from var(--primary) l c h / 0.5), 0 0 24px oklch(from var(--primary) l c h / 0.3);
}
95% {
box-shadow: 0 0 4px oklch(from var(--primary) l c h / 0.2);
}
}
.animate-glow {
animation: glow 3s ease-in-out infinite;
}
/* Border shimmer (flowing light) animation */
@keyframes border-flow {
0% {
transform: translateX(-100%) rotate(0deg);
}
100% {
transform: translateX(100%) rotate(0deg);
}
}
.animate-border-flow {
animation: border-flow 2s linear infinite;
}
/* Dashboard fade-in animation - pure CSS to avoid hydration mismatch */
@keyframes dashboard-fade-in {
from {
opacity: 0;
filter: blur(4px);
}
to {
opacity: 1;
filter: blur(0);
}
}
.animate-dashboard-fade-in {
animation: dashboard-fade-in 500ms ease-out forwards;
}
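The fade-in above is applied as a static class rather than a mount-time style toggle, which is what sidesteps the hydration mismatch the comment mentions: server HTML and the first client render carry the same markup. A sketch of that intent, assuming a hypothetical helper name (not from the codebase):

```typescript
// A server-rendered element that already carries the animation class
// produces identical HTML on server and client, so React hydration
// sees no mismatch. A useEffect-driven opacity toggle would render
// differently on first paint and risk a hydration warning.
function withDashboardFadeIn(base: string): string {
  return [base, "animate-dashboard-fade-in"].filter(Boolean).join(" ");
}
```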
/* Login page - Glitch Reveal fullscreen intro - enhanced cyberpunk style */
@keyframes orbit-splash-jitter {
0%,
100% {
transform: translate3d(0, 0, 0);
filter: none;
}
10% {
transform: translate3d(-2px, 0, 0);
}
20% {
transform: translate3d(2px, -1px, 0);
filter: hue-rotate(10deg);
}
30% {
transform: translate3d(-1px, 1px, 0);
}
45% {
transform: translate3d(1px, 0, 0);
filter: hue-rotate(-10deg);
}
60% {
transform: translate3d(0, -1px, 0);
}
75% {
transform: translate3d(1px, 1px, 0);
}
}
@keyframes orbit-splash-noise {
0% {
transform: translate3d(-2%, -2%, 0);
opacity: 0.22;
}
25% {
transform: translate3d(2%, -1%, 0);
opacity: 0.28;
}
50% {
transform: translate3d(-1%, 2%, 0);
opacity: 0.24;
}
75% {
transform: translate3d(1%, 1%, 0);
opacity: 0.30;
}
100% {
transform: translate3d(-2%, -2%, 0);
opacity: 0.22;
}
}
@keyframes orbit-splash-sweep {
0% {
transform: translate3d(0, -120%, 0);
opacity: 0;
}
18% {
opacity: 0.35;
}
100% {
transform: translate3d(0, 120%, 0);
opacity: 0;
}
}
@keyframes orbit-glitch-clip {
0% {
clip-path: inset(0 0 0 0);
transform: translate3d(0, 0, 0);
}
16% {
clip-path: inset(12% 0 72% 0);
transform: translate3d(-2px, 0, 0);
}
32% {
clip-path: inset(54% 0 18% 0);
transform: translate3d(2px, 0, 0);
}
48% {
clip-path: inset(78% 0 6% 0);
transform: translate3d(-1px, 0, 0);
}
64% {
clip-path: inset(30% 0 48% 0);
transform: translate3d(1px, 0, 0);
}
80% {
clip-path: inset(6% 0 86% 0);
transform: translate3d(0, 0, 0);
}
100% {
clip-path: inset(0 0 0 0);
transform: translate3d(0, 0, 0);
}
}
.orbit-splash-glitch {
isolation: isolate;
animation: orbit-splash-jitter 0.5s steps(2, end) infinite;
}
.orbit-splash-glitch::before {
content: "";
position: absolute;
inset: -20%;
pointer-events: none;
z-index: 20;
mix-blend-mode: screen;
background-image:
repeating-linear-gradient(
0deg,
rgba(255, 255, 255, 0.08) 0px,
rgba(255, 255, 255, 0.08) 1px,
transparent 1px,
transparent 4px
),
repeating-linear-gradient(
90deg,
rgba(255, 16, 240, 0.15) 0px,
rgba(255, 16, 240, 0.15) 1px,
transparent 1px,
transparent 84px
),
repeating-linear-gradient(
45deg,
rgba(176, 38, 255, 0.08) 0px,
rgba(176, 38, 255, 0.08) 1px,
transparent 1px,
transparent 9px
);
animation: orbit-splash-noise 0.5s steps(2, end) infinite;
}
.orbit-splash-glitch::after {
content: "";
position: absolute;
inset: 0;
pointer-events: none;
z-index: 20;
background: linear-gradient(
180deg,
transparent 0%,
rgba(255, 16, 240, 0.18) 50%,
transparent 100%
);
opacity: 0;
animation: orbit-splash-sweep 0.5s ease-out both;
}
.orbit-glitch-text {
position: relative;
display: inline-block;
text-shadow: 0 0 20px rgba(255, 16, 240, 0.4), 0 0 40px rgba(255, 16, 240, 0.2);
}
.orbit-glitch-text::before,
.orbit-glitch-text::after {
content: attr(data-text);
position: absolute;
inset: 0;
pointer-events: none;
}
.orbit-glitch-text::before {
color: rgba(255, 16, 240, 0.85);
transform: translate3d(-2px, 0, 0);
animation: orbit-glitch-clip 0.5s steps(2, end) infinite;
}
.orbit-glitch-text::after {
color: rgba(176, 38, 255, 0.75);
transform: translate3d(2px, 0, 0);
animation: orbit-glitch-clip 0.5s steps(2, end) infinite reverse;
}
@media (prefers-reduced-motion: reduce) {
.orbit-splash-glitch,
.orbit-splash-glitch::before,
.orbit-splash-glitch::after,
.orbit-glitch-text::before,
.orbit-glitch-text::after {
animation: none !important;
}
}
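Note that the `.orbit-glitch-text` pseudo-elements render `attr(data-text)`, so the element's `data-text` attribute must mirror its visible text or the offset glitch copies will desync from the real label. A sketch of a prop-builder that keeps the two in sync (the helper name is illustrative):

```typescript
// Builds props for an element styled with .orbit-glitch-text.
// The ::before/::after layers read attr(data-text), so the attribute
// is set to the same string as the visible children.
function glitchTextProps(text: string): {
  className: string;
  "data-text": string;
  children: string;
} {
  return { className: "orbit-glitch-text", "data-text": text, children: text };
}
```

Spread into the element (e.g. `<span {...glitchTextProps("ORBIT")} />`), this guarantees the magenta/purple ghost layers always show the same characters as the base text.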

Some files were not shown because too many files have changed in this diff.