Compare commits

...

57 Commits

Author SHA1 Message Date
yyhuni
ce8cebf11d chore(frontend): update pnpm-lock.yaml with @radix-ui/react-hover-card
- Add @radix-ui/react-hover-card@1.1.15 package resolution entry
- Add package snapshot with all required dependencies and peer dependencies
- Update lock file to reflect new hover card component dependency
- Ensures consistent dependency management across the frontend environment
2026-01-08 07:57:58 +08:00
yyhuni
ec006d8f54 chore(frontend): add @radix-ui/react-hover-card dependency
- Add @radix-ui/react-hover-card v1.1.6 to project dependencies
- Enables hover card UI component functionality for improved user interactions
- Maintains consistency with existing Radix UI component library usage
2026-01-08 07:56:07 +08:00
yyhuni
48976a570f docs: update README with screenshot feature and sponsorship info
- Add screenshot feature documentation to features section with Playwright details
- Include WebP format compression benefits and multi-source URL support
- Add screenshot stage to scan flow architecture diagram with styling
- Add fingerprint library table with counts for public distribution
- Add sponsorship section with WeChat Pay and Alipay QR codes
- Add sponsor appreciation table
- Update frontend dependencies with @radix-ui/react-visually-hidden package
- Remove redundant installation speed note from mirror parameter documentation
- Clean up demo link formatting in online demo section
2026-01-08 07:54:31 +08:00
yyhuni
5da7229873 feat(scan-overview): add yaml configuration tab and improve logs layout
- Add yaml_configuration field to ScanHistorySerializer for backend exposure
- Implement tabbed interface with Logs and Configuration tabs in scan overview
- Add YamlEditor component to display scan configuration in read-only mode
- Refactor logs section to show status bar only when logs tab is active
- Move auto-refresh toggle to logs tab header for better UX
- Add padding to stage progress items for improved visual alignment
- Add internationalization strings for new UI elements (en and zh)
- Update ScanHistory type to include yamlConfiguration field
- Improve tab switching state management with activeTab state
2026-01-08 07:31:54 +08:00
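For context, exposing a stored YAML blob through a read-only serializer field typically looks like the minimal sketch below; the model import path and surrounding fields are assumptions, and only `yaml_configuration` comes from the commit message.

```python
# Hypothetical sketch only; not the project's actual serializer.
from rest_framework import serializers

from apps.scan.models import Scan  # assumed location of the Scan model


class ScanHistorySerializer(serializers.ModelSerializer):
    class Meta:
        model = Scan
        fields = ['id', 'status', 'yaml_configuration', 'created_at']
        read_only_fields = ['yaml_configuration']
```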
yyhuni
8bb737a9fa feat(scan-history): add auto-refresh toggle and improve layout
- Add auto-refresh toggle switch to scan logs section for manual control
- Implement flexible polling based on auto-refresh state and scan status
- Restructure scan overview layout to use left-right split (stages + logs)
- Move stage progress to left column with vulnerability statistics
- Implement scrollable logs panel on right side with proper height constraints
- Update component imports to use Switch and Label instead of Button
- Add full-height flex layout to parent containers for proper scrolling
- Refactor grid layout from 2-column to fixed-width left + flexible right
- Update translations for new UI elements and labels
- Improve responsive design with better flex constraints and min-height handling
2026-01-07 23:30:27 +08:00
yyhuni
2d018d33f3 Optimize the scan history detail page 2026-01-07 22:44:46 +08:00
yyhuni
0c07cc8497 refactor(scan-flows): simplify logger calls by splitting multiline strings
- Split multiline logger.info() calls into separate single-line calls in initiate_scan_flow.py
- Improve log readability by removing string concatenation with newlines and separators
- Refactor 6 logger.info() calls across sequential, parallel, and completion stages
- Update subdomain_discovery_flow.py to use a consistent single-line logger pattern
- Maintain the same log output while improving code maintainability and consistency
2026-01-07 22:21:50 +08:00
yyhuni
225b039985 style(system-logs): adjust log level filter dropdown width
- Increase SelectTrigger width from 100px to 130px for better label visibility
- Improve UI consistency in log toolbar component
- Prevent text truncation in log level filter dropdown
2026-01-07 22:17:07 +08:00
yyhuni
d1624627bc Add icons to top-level tabs 2026-01-07 22:14:42 +08:00
yyhuni
7bb15e4ae4 Add screenshot feature 2026-01-07 22:10:51 +08:00
github-actions[bot]
8e8cc29669 chore: bump version to v1.4.1 2026-01-07 01:33:29 +00:00
yyhuni
d6d5338acb Add asset deletion feature 2026-01-07 09:29:31 +08:00
yyhuni
c521bdb511 Refactor: fallback logic 2026-01-07 08:45:27 +08:00
yyhuni
abf2d95f6f feat(targets): increase max batch size for target creation from 1000 to 5000
- Update MAX_BATCH_SIZE constant in BatchCreateTargetSerializer from 1000 to 5000
- Increase batch creation limit to support larger bulk operations
- Update documentation comment to reflect new limit
- Allows users to create up to 5000 targets in a single batch operation
2026-01-06 20:39:31 +08:00
github-actions[bot]
ab58cf0d85 chore: bump version to v1.4.0 2026-01-06 09:31:29 +00:00
yyhuni
fb0111adf2 Merge branch 'dev' 2026-01-06 17:27:35 +08:00
yyhuni
161ee9a2b1 Merge branch 'dev' 2026-01-06 17:27:16 +08:00
yyhuni
0cf75585d5 docs: add blacklist filtering documentation to README 2026-01-06 17:25:31 +08:00
yyhuni
1d8d5f51d9 feat(blacklist): add mock data and service integration for blacklist management
- Create new blacklist mock data module with global and target-specific patterns
- Add mock functions for getting and updating global blacklist rules
- Add mock functions for getting and updating target-specific blacklist rules
- Integrate mock blacklist endpoints into global-blacklist.service.ts
- Integrate mock blacklist endpoints into target.service.ts
- Export blacklist mock functions from main mock index
- Enable testing of blacklist management UI without backend API
2026-01-06 17:08:51 +08:00
github-actions[bot]
3f8de07c8c chore: bump version to v1.4.0-dev 2026-01-06 09:02:31 +00:00
yyhuni
cd5c2b9f11 chore(notifications): remove test notification endpoint
- Remove test notification route from URL patterns
- Delete notifications_test view function and associated logic
- Clean up unused test endpoint that was used for development purposes
- Simplify notification API surface by removing non-production code
2026-01-06 16:57:29 +08:00
yyhuni
54786c22dd feat(scan): increase max batch size for quick scan operations
- Increase MAX_BATCH_SIZE from 1000 to 5000 in QuickScanSerializer
- Allows processing of larger batch scans in a single operation
- Improves throughput for bulk scanning workflows
2026-01-06 16:55:28 +08:00
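A cap like this is usually enforced declaratively on the list field; here is a minimal sketch assuming a `target_ids` field (the real serializer's field names are not shown in this diff, only the 5000 limit is):

```python
# Minimal sketch of a batch-size cap in a DRF serializer; target_ids is an
# assumed field name, only the MAX_BATCH_SIZE value comes from the commit.
from rest_framework import serializers

MAX_BATCH_SIZE = 5000


class QuickScanSerializer(serializers.Serializer):
    target_ids = serializers.ListField(
        child=serializers.IntegerField(min_value=1),
        allow_empty=False,
        max_length=MAX_BATCH_SIZE,  # reject payloads larger than the cap
    )
```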
yyhuni
d468f975ab feat(scan): implement fallback chain for endpoint URL export
- Add fallback chain for URL data sources: Endpoint → WebSite → default generation
- Import WebSite model and Path utility for enhanced file handling
- Create output directory automatically if it doesn't exist
- Add "source" field to return value indicating data origin (endpoint/website/default)
- Update docstring to document the three-tier fallback priority system
- Implement sequential export attempts with logging at each fallback stage
- Improve error handling and data source transparency for endpoint exports
2026-01-06 16:30:42 +08:00
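The fallback chain described here reads naturally as a prioritized list of sources; a language-level sketch, with hypothetical fetcher callables standing in for the real ORM queries:

```python
# Illustrative sketch of a three-tier fallback: Endpoint -> WebSite -> default.
# The fetcher callables are hypothetical stand-ins for the real model queries.
from typing import Callable, Optional


def export_urls(
    fetch_endpoints: Callable[[], Optional[list[str]]],
    fetch_websites: Callable[[], Optional[list[str]]],
    generate_default: Callable[[], list[str]],
) -> tuple[list[str], str]:
    """Try each source in priority order; report which one supplied the data."""
    for source, fetch in (('endpoint', fetch_endpoints), ('website', fetch_websites)):
        urls = fetch()
        if urls:  # fall through on None or an empty list
            return urls, source
    return generate_default(), 'default'
```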
yyhuni
a85a12b8ad feat(asset): create incremental materialized views for asset search
- Add pg_ivm extension for incremental materialized view maintenance
- Create asset_search_view for Website model with optimized columns for full-text search
- Create endpoint_search_view for Endpoint model with matching search schema
- Add database indexes on host, url, title, status_code, and created_at columns for both views
- Enable high-performance asset search queries with automatic view maintenance
2026-01-06 16:22:24 +08:00
yyhuni
a8b0d97b7b feat(targets): update navigation routes and enhance add button UI
- Change target detail navigation route from `/website/` to `/overview/`
- Update TargetNameCell click handler to use new overview route
- Update TargetRowActions onView handler to use new overview route
- Add IconPlus icon import from @tabler/icons-react
- Add icon to create target button for improved visual clarity
- Improves navigation consistency and button affordance in targets table
2026-01-06 16:14:54 +08:00
yyhuni
b8504921c2 feat(fingerprints): add JSONL format support for Goby fingerprint imports
- Add support for JSONL format parsing in addition to standard JSON for Goby fingerprints
- Update GobyFingerprintService to validate both standard format (name/logic/rule) and JSONL format (product/rule)
- Implement _parse_json_content() method to handle both JSON and JSONL file formats with proper error handling
- Add JSONL parsing logic in frontend import dialog with per-line validation and error reporting
- Update file import endpoint documentation to indicate JSONL format support
- Improve error messages for encoding and parsing failures to aid user debugging
- Enable seamless import of Goby fingerprint data from multiple source formats
2026-01-06 16:10:14 +08:00
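A hedged sketch of the dual-format parsing this commit describes; the real `_parse_json_content()` lives in `GobyFingerprintService`, and this standalone version only shows the try-JSON-then-JSONL idea with per-line error reporting:

```python
import json


def parse_json_or_jsonl(content: str) -> list[dict]:
    """Parse fingerprint data from standard JSON or line-delimited JSONL."""
    try:
        # Standard JSON first: a whole document that is a list or one object
        data = json.loads(content)
        return data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        # Fall back to JSONL: one JSON object per line, with per-line errors
        records = []
        for lineno, line in enumerate(content.splitlines(), start=1):
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError as e:
                raise ValueError(f"invalid JSONL at line {lineno}: {e}") from e
        return records
```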
yyhuni
ecfc1822fb style(target): update vulnerability icon color to muted foreground
- Change ShieldAlert icon color from red-500 to muted-foreground in target overview
- Improves visual consistency with design system color palette
- Reduces visual emphasis on vulnerability section for better UI balance
2026-01-06 12:01:59 +08:00
github-actions[bot]
81633642e6 chore: bump version to v1.3.16-dev 2026-01-06 03:55:16 +00:00
yyhuni
d1ec9b7f27 feat(settings): add global blacklist management page and UI integration
- Add new global blacklist settings page with pattern management interface
- Create useGlobalBlacklist and useUpdateGlobalBlacklist React Query hooks for data fetching and mutations
- Implement global-blacklist.service.ts with API integration for blacklist operations
- Add Global Blacklist navigation item to app sidebar with Ban icon
- Add internationalization support for blacklist UI with English and Chinese translations
- Include pattern matching rules documentation (domain wildcards, keywords, IP addresses, CIDR ranges)
- Add loading states, error handling, and success/error toast notifications
- Implement textarea input with change tracking and save button state management
2026-01-06 11:50:31 +08:00
yyhuni
2a3d9b4446 feat(target): add initiate scan button and improve overview layout
- Add "Initiate Scan" button to target overview header with Play icon
- Implement InitiateScanDialog component integration for quick scan initiation
- Improve scheduled scans card layout with flexbox for better vertical spacing
- Reduce displayed scheduled scans from 3 to 2 items for better UI balance
- Enhance vulnerability statistics card styling with proper flex layout
- Add state management for scan dialog open/close functionality
- Update i18n translations (en.json, zh.json) with "initiateScan" label
- Refactor target info section to accommodate new action button with justify-between layout
- Improve empty state centering in scheduled scans card using flex layout
2026-01-06 11:10:47 +08:00
yyhuni
9b63203b5a refactor(migrations,frontend,backend): reorganize app structure and enhance target management UI
- Consolidate common migrations into dedicated common app module
- Remove asset search materialized view migration (0002) and simplify migration structure
- Reorganize target detail page with new overview and settings sub-routes
- Add target overview component displaying key asset information
- Add target settings component for configuration management
- Enhance scan history UI with improved data table and column definitions
- Update scheduled scan dialog with better form handling
- Refactor target service with improved API integration
- Update scan hooks (use-scans, use-scheduled-scans) with better state management
- Add internationalization strings for new target management features
- Update Docker initialization and startup scripts for new app structure
- Bump Django to 5.2.7 and update dependencies in requirements.txt
- Add WeChat group contact information to README
- Improve UI tabs component with better accessibility and styling
2026-01-06 10:42:38 +08:00
yyhuni
6ff86e14ec Update README.md 2026-01-06 09:59:55 +08:00
yyhuni
4c1282e9bb Complete the blacklist backend logic 2026-01-05 23:26:50 +08:00
yyhuni
ba3a9b709d feat(system-logs): enhance ANSI log viewer with log level colorization
- Add LOG_LEVEL_COLORS configuration mapping for DEBUG, INFO, WARNING, WARN, ERROR, and CRITICAL levels
- Implement hasAnsiCodes() function to detect presence of ANSI escape sequences in log content
- Add colorizeLogContent() function to parse plain text logs and apply color styling based on log levels
- Support dual-mode log parsing: ANSI color codes and plain text log level detection
- Rename converter to ansiConverter for clarity and consistency
- Change newline handling from true to false for manual line break control
- Apply color-coded styling to timestamps (gray), log levels (level-specific colors), and messages
- Add bold font-weight styling for CRITICAL level logs for better visibility
2026-01-05 16:27:31 +08:00
github-actions[bot]
283b28b46a chore: bump version to v1.3.15-dev 2026-01-05 02:05:29 +00:00
yyhuni
1269e5a314 refactor(scan): reorganize models and serializers into modular structure
- Split monolithic models.py into separate model files (scan_models.py, scan_log_model.py, scheduled_scan_model.py, subfinder_provider_settings_model.py)
- Split monolithic serializers.py into separate serializer files with dedicated modules for each domain
- Add SubfinderProviderSettings model to store API key configurations for subfinder data sources
- Create SubfinderProviderConfigService to generate provider configuration files dynamically
- Add subfinder_provider_settings views and serializers for API key management
- Update subdomain_discovery_flow to support provider configuration file generation and passing to subfinder
- Update command templates to use provider config file and remove recursive flag for better source coverage
- Add frontend settings page for managing API keys at /settings/api-keys
- Add frontend hooks and services for API key settings management
- Update sidebar navigation to include API keys settings link
- Add internationalization support for new API keys settings UI (English and Chinese)
- Improves code maintainability by organizing related models and serializers into logical modules
2026-01-05 10:00:19 +08:00
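For context, subfinder reads provider API keys from a YAML file mapping source names to lists of keys; a minimal sketch of dynamic generation in the spirit of `SubfinderProviderConfigService` (the function name and file handling here are assumptions):

```python
import yaml  # PyYAML


def write_provider_config(api_keys: dict[str, list[str]], path: str) -> str:
    """Write a provider config containing only sources that have keys."""
    configured = {source: keys for source, keys in api_keys.items() if keys}
    with open(path, 'w', encoding='utf-8') as f:
        yaml.safe_dump(configured, f, default_flow_style=False)
    return path


# Example: write_provider_config({'shodan': ['KEY1'], 'censys': []}, '/tmp/provider-config.yaml')
# writes a file containing only the shodan entry.
```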
yyhuni
802e967906 docs: add online demo link to README
- Add new "🌐 在线 Demo" section with live demo URL
- Include disclaimer note that demo is UI-only without backend database
- Improve documentation to help users quickly access and test the application
2026-01-04 19:19:33 +08:00
github-actions[bot]
e446326416 chore: bump version to v1.3.14 2026-01-04 11:02:14 +00:00
yyhuni
e0abb3ce7b Merge branch 'dev' 2026-01-04 18:57:49 +08:00
yyhuni
d418baaf79 feat(mock,scan): add comprehensive mock data and improve system load management
- Add mock data files for directories, fingerprints, IP addresses, notification settings, nuclei templates, search, system logs, tools, and wordlists
- Update mock index to export new mock data modules
- Increase SCAN_LOAD_CHECK_INTERVAL from 30 to 180 seconds for better system stability
- Improve load check logging message to clarify OOM prevention strategy
- Enhance mock data infrastructure to support frontend development and testing
2026-01-04 18:52:08 +08:00
github-actions[bot]
f8da408580 chore: bump version to v1.3.13-dev 2026-01-04 10:24:10 +00:00
yyhuni
7cd4354d8f feat(scan,asset): add scan logging system and improve search view architecture
- Add user_logger utility for structured scan operation logging
- Create scan log views and API endpoints for retrieving scan execution logs
- Add scan-log-list component and use-scan-logs hook for frontend log display
- Refactor asset search views to remove ArrayField support from pg_ivm IMMV
- Update search_service.py to JOIN original tables for array field retrieval
- Add system architecture requirements (AMD64/ARM64) to README
- Update scan flow handlers to integrate logging system
- Enhance scan progress dialog with log viewer integration
- Add ANSI log viewer component for formatted log display
- Update scan service API to support log retrieval endpoints
- Migrate database schema to support new logging infrastructure
- Add internationalization strings for scan logs (en/zh)
This change improves observability of scan operations and resolves pg_ivm limitations with ArrayField types by fetching array data from original tables via JOIN operations.
2026-01-04 18:19:45 +08:00
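The pg_ivm limitation called out above shapes the queries: the IMMV omits array columns, and searches JOIN the base table back onto the view to read them. An illustrative query shape (view and table names follow the migrations shown later in this diff):

```python
# Illustrative only: the IMMV omits array columns (a pg_ivm limitation),
# so search queries JOIN the base table to read them.
ASSET_SEARCH_SQL = """
SELECT v.id, v.url, v.host, v.title,
       t.tech                -- ArrayField, available only on the base table
FROM asset_search_view v
JOIN website t ON v.id = t.id
WHERE v.host ILIKE %s
ORDER BY v.created_at DESC
"""
```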
yyhuni
6bf35a760f chore(docker): configure Prefect home directory in worker image
- Add PREFECT_HOME environment variable pointing to /app/.prefect
- Create Prefect configuration directory to prevent home directory warnings
- Update step numbering in Dockerfile comments for clarity
- Ensures Prefect can properly initialize configuration without relying on user home directory
2026-01-04 10:39:11 +08:00
github-actions[bot]
be9ecadffb chore: bump version to v1.3.12-dev 2026-01-04 01:05:00 +00:00
yyhuni
adb53c9f85 feat(asset,scan): add configurable statement timeout and improve CSV export
- Add statement_timeout_ms parameter to search_service count() and stream_search() methods for long-running exports
- Replace server-side cursors with OFFSET/LIMIT batching for better Django compatibility
- Introduce create_csv_export_response() utility function to standardize CSV export handling
- Add engine-preset-selector and scan-config-editor components for enhanced scan configuration UI
- Update YAML editor component with improved styling and functionality
- Add i18n translations for new scan configuration features in English and Chinese
- Refactor CSV export endpoints to use new utility function instead of manual StreamingHttpResponse
- Remove unused uuid import from search_service.py
- Update nginx configuration for improved performance
- Enhance search service with configurable timeout support for large dataset exports
2026-01-04 08:58:31 +08:00
yyhuni
7b7bbed634 Update README.md 2026-01-03 22:15:35 +08:00
github-actions[bot]
8dd3f0536e chore: bump version to v1.3.11-dev 2026-01-03 11:54:31 +00:00
yyhuni
8a8062a12d refactor(scan): rename merged_configuration to yaml_configuration
- Rename `merged_configuration` field to `yaml_configuration` in Scan and ScheduledScan models for clarity
- Update all references across scan repositories, services, views, and serializers
- Update database migration to reflect field name change with improved help text
- Update frontend components to use new field naming convention
- Add YAML editor component for improved configuration editing in UI
- Update engine configuration retrieval in initiate_scan_flow to use new field name
- Remove unused asset tasks __init__.py module
- Simplify README feedback section for better clarity
- Update frontend type definitions and internationalization messages for consistency
2026-01-03 19:50:20 +08:00
yyhuni
55908a2da5 fix(asset,scan): improve decorator usage and dialog layout
- Fix transaction.non_atomic_requests decorator usage in AssetSearchExportView by wrapping with method_decorator for proper class-based view compatibility
- Update scan progress dialog to use flexible width (sm:max-w-fit sm:min-w-[450px]) instead of fixed width for better responsiveness
- Refactor engine names display from single Badge to grid layout with multiple badges for improved readability when multiple engines are present
- Add proper spacing and alignment adjustments (gap-4, items-start) to accommodate multi-line engine badge display
- Add text-xs and whitespace-nowrap to engine badges for consistent styling in grid layout
2026-01-03 18:46:44 +08:00
github-actions[bot]
22a7d4f091 chore: bump version to v1.3.10-dev 2026-01-03 10:45:32 +00:00
yyhuni
f287f18134 Update pinned images 2026-01-03 18:33:25 +08:00
yyhuni
de27230b7a Update build CI 2026-01-03 18:28:57 +08:00
github-actions[bot]
15a6295189 chore: bump version to v1.3.8-dev 2026-01-03 10:24:17 +00:00
yyhuni
674acdac66 refactor(asset): move database extension initialization to migrations
- Remove pg_trgm and pg_ivm extension setup from AssetConfig.ready() method
- Move extension creation to migration 0002 using RunSQL operations
- Add pg_trgm extension creation for text search index support
- Add pg_ivm extension creation for IMMV incremental maintenance
- Generate unique cursor names in search_service to prevent concurrent request conflicts
- Add @transaction.non_atomic_requests decorator to export view for server-side cursor compatibility
- Simplify app initialization by delegating extension setup to database migrations
- Improve thread safety and concurrency handling for streaming exports
2026-01-03 18:20:27 +08:00
github-actions[bot]
c59152bedf chore: bump version to v1.3.7-dev 2026-01-03 09:56:39 +00:00
yyhuni
b4037202dc feat: use registry cache for faster builds 2026-01-03 17:35:54 +08:00
github-actions[bot]
08372588a4 chore: bump version to v1.2.15 2026-01-01 15:44:15 +00:00
203 changed files with 12314 additions and 2372 deletions

View File

@@ -19,7 +19,8 @@ permissions:
contents: write
jobs:
build:
# AMD64 build (native x64 runner)
build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
@@ -27,43 +28,30 @@ jobs:
- image: xingrin-server
dockerfile: docker/server/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-frontend
dockerfile: docker/frontend/Dockerfile
context: .
platforms: linux/amd64 # Next.js crashes under QEMU when building for ARM64
- image: xingrin-worker
dockerfile: docker/worker/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-nginx
dockerfile: docker/nginx/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-agent
dockerfile: docker/agent/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-postgres
dockerfile: docker/postgres/Dockerfile
context: docker/postgres
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Free disk space (for large builds like worker)
- name: Free disk space
run: |
echo "=== Before cleanup ==="
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc /opt/hostedtoolcache/CodeQL
sudo docker image prune -af
echo "=== After cleanup ==="
df -h
- name: Generate SSL certificates for nginx build
if: matrix.image == 'xingrin-nginx'
@@ -73,10 +61,6 @@ jobs:
-keyout docker/nginx/ssl/privkey.pem \
-out docker/nginx/ssl/fullchain.pem \
-subj "/CN=localhost"
echo "SSL certificates generated for CI build"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -87,7 +71,120 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version from git tag
- name: Get version
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
fi
- name: Build and push AMD64
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: linux/amd64
push: true
tags: ${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}-amd64
build-args: IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-amd64
cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-amd64,mode=max
provenance: false
sbom: false
# ARM64 build (native ARM64 runner)
build-arm64:
runs-on: ubuntu-22.04-arm
strategy:
matrix:
include:
- image: xingrin-server
dockerfile: docker/server/Dockerfile
context: .
- image: xingrin-frontend
dockerfile: docker/frontend/Dockerfile
context: .
- image: xingrin-worker
dockerfile: docker/worker/Dockerfile
context: .
- image: xingrin-nginx
dockerfile: docker/nginx/Dockerfile
context: .
- image: xingrin-agent
dockerfile: docker/agent/Dockerfile
context: .
- image: xingrin-postgres
dockerfile: docker/postgres/Dockerfile
context: docker/postgres
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Generate SSL certificates for nginx build
if: matrix.image == 'xingrin-nginx'
run: |
mkdir -p docker/nginx/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout docker/nginx/ssl/privkey.pem \
-out docker/nginx/ssl/fullchain.pem \
-subj "/CN=localhost"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
fi
- name: Build and push ARM64
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: linux/arm64
push: true
tags: ${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}-arm64
build-args: IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-arm64
cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-arm64,mode=max
provenance: false
sbom: false
# Merge multi-arch manifests
merge-manifests:
runs-on: ubuntu-latest
needs: [build-amd64, build-arm64]
strategy:
matrix:
image:
- xingrin-server
- xingrin-frontend
- xingrin-worker
- xingrin-nginx
- xingrin-agent
- xingrin-postgres
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
@@ -98,28 +195,27 @@ jobs:
echo "IS_RELEASE=false" >> $GITHUB_OUTPUT
fi
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: true
tags: |
${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}
${{ steps.version.outputs.IS_RELEASE == 'true' && format('{0}/{1}:latest', env.IMAGE_PREFIX, matrix.image) || '' }}
build-args: |
IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=gha,scope=${{ matrix.image }}
cache-to: type=gha,mode=max,scope=${{ matrix.image }}
provenance: false
sbom: false
- name: Create and push multi-arch manifest
run: |
VERSION=${{ steps.version.outputs.VERSION }}
IMAGE=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}
docker manifest create ${IMAGE}:${VERSION} \
${IMAGE}:${VERSION}-amd64 \
${IMAGE}:${VERSION}-arm64
docker manifest push ${IMAGE}:${VERSION}
if [[ "${{ steps.version.outputs.IS_RELEASE }}" == "true" ]]; then
docker manifest create ${IMAGE}:latest \
${IMAGE}:${VERSION}-amd64 \
${IMAGE}:${VERSION}-arm64
docker manifest push ${IMAGE}:latest
fi
# After all images build successfully, update the VERSION file
# Update the VERSION file on the branch the tag belongs to
# Update the VERSION file
update-version:
runs-on: ubuntu-latest
needs: build
needs: merge-manifests
if: startsWith(github.ref, 'refs/tags/v')
steps:
- name: Checkout repository

View File

@@ -25,6 +25,13 @@
---
## 🌐 Online Demo
**[https://xingrin.vercel.app/](https://xingrin.vercel.app/)**
> ⚠️ UI demo only; no backend database is connected
---
<p align="center">
<b>🎨 Modern UI</b>
@@ -62,11 +69,21 @@
- **Custom workflows** - Scan flows configured in YAML for flexible orchestration
- **Scheduled scans** - Cron-expression schedules for automated recurring scans
### 🚫 Blacklist Filtering
- **Two-tier blacklists** - A global blacklist plus per-target blacklists for flexible scan-scope control
- **Smart rule detection** - Automatically recognizes domain wildcards (`*.gov`), IPs, and CIDR ranges
### 🔖 Fingerprinting
- **Multi-source fingerprint library** - Ships with 27,000+ rules from EHole, Goby, Wappalyzer, Fingers, FingerPrintHub, ARL, and more
- **Automatic detection** - Runs automatically within the scan flow to identify web technology stacks
- **Fingerprint management** - Query, import, and export fingerprint rules
### 📸 Site Screenshots
- **Automatic capture** - Playwright automatically screenshots discovered websites
- **WebP format** - High-compression storage: a ~500 KB image compresses down to just tens of KB
- **Multiple sources** - Can screenshot URLs from Websites, Endpoints, and other sources
- **Asset linkage** - Screenshots sync automatically to the asset table for easy viewing
#### Scan Flow Architecture
The full scan flow includes subdomain discovery, port scanning, site discovery, fingerprinting, URL collection, directory scanning, and vulnerability scanning
@@ -88,6 +105,7 @@ flowchart LR
direction TB
URL["URL 收集<br/>waymore, katana"]
DIR["目录扫描<br/>ffuf"]
SCREENSHOT["站点截图<br/>playwright"]
end
subgraph STAGE3["Stage 3: Vulnerability Detection"]
@@ -112,6 +130,7 @@ flowchart LR
style FINGER fill:#5dade2,stroke:#3498db,stroke-width:1px,color:#fff
style URL fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style DIR fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style SCREENSHOT fill:#bb8fce,stroke:#9b59b6,stroke-width:1px,color:#fff
style VULN fill:#f0b27a,stroke:#e67e22,stroke-width:1px,color:#fff
```
@@ -198,6 +217,7 @@ url="/api/v1" && status!="404"
### Requirements
- **Operating system**: Ubuntu 20.04+ / Debian 11+
- **Architecture**: AMD64 (x86_64) / ARM64 (aarch64)
- **Hardware**: 2 CPU cores and 4 GB RAM minimum, 20 GB+ disk space
### One-Command Install
@@ -217,7 +237,6 @@ sudo ./install.sh --mirror
> **💡 About the --mirror flag**
> - Automatically configures Docker registry mirrors (sources for mainland China)
> - Speeds up Git repository clones (Nuclei templates, etc.)
> - Greatly speeds up installation and avoids network timeouts
### Accessing the Services
@@ -242,17 +261,40 @@ sudo ./uninstall.sh
## 🤝 Feedback & Contributing
- 🐛 **Found a bug?** Report it via this link: [Issue](https://github.com/yyhuni/xingrin/issues)
- 💡 **Have new ideas (UI design, feature design, etc.)?** Submit a suggestion via this link: [Issue](https://github.com/yyhuni/xingrin/issues)
- 💡 **Found a bug, or have new ideas (UI design, feature design, etc.)?** Submit an [Issue](https://github.com/yyhuni/xingrin/issues) or DM the WeChat official account
## 📧 Contact
- I am currently the only user of this version, so there may be many edge-case issues
- For questions, suggestions, or anything else, please file an [Issue](https://github.com/yyhuni/xingrin/issues) first; you can also message my WeChat official account directly and I will reply
- WeChat official account: **塔罗安全学苑**
- The WeChat group is in the official account's bottom menu; if the link has expired, DM me and I will invite you
<img src="docs/wechat-qrcode.png" alt="WeChat official account" width="200">
### 🎁 Follow the Official Account for Free Fingerprint Libraries
| Fingerprint library | Count |
|--------|------|
| ehole.json | 21,977 |
| ARL.yaml | 9,264 |
| goby.json | 7,086 |
| FingerprintHub.json | 3,147 |
> 💡 Follow the official account and reply 「指纹」 (fingerprints) to get them
## ☕ Sponsorship
If this project helps you, please consider buying me a Mixue Bingcheng drink; your stars and sponsorship keep the free updates coming
<p>
<img src="docs/wx_pay.jpg" alt="WeChat Pay" width="200">
<img src="docs/zfb_pay.jpg" alt="Alipay" width="200">
</p>
### 🙏 Thanks to These Sponsors
| Nickname | Amount |
|------|------|
| X闭关中 | ¥88 |
## ⚠️ Disclaimer

View File

@@ -1 +1 @@
v1.3.5-dev
v1.4.1

View File

@@ -1,106 +1,6 @@
import logging
import sys
from django.apps import AppConfig
logger = logging.getLogger(__name__)
class AssetConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'apps.asset'
    def ready(self):
        # Import all models so Django discovers and registers them
        from . import models
        # Enable the pg_trgm extension (used for fuzzy text-search indexes)
        # Covers the upgrade path for existing databases
        self._ensure_pg_trgm_extension()
        # Verify the pg_ivm extension is available (used for IMMV incremental maintenance)
        self._verify_pg_ivm_extension()
    def _ensure_pg_trgm_extension(self):
        """
        Ensure the pg_trgm extension is enabled.
        The extension backs the GIN indexes on the response_body and
        response_headers fields, enabling efficient fuzzy text search.
        """
        from django.db import connection
        # Only applies to PostgreSQL databases
        if connection.vendor != 'postgresql':
            logger.debug("Skipping pg_trgm extension: the database is not PostgreSQL")
            return
        try:
            with connection.cursor() as cursor:
                cursor.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
                logger.debug("pg_trgm extension enabled")
        except Exception as e:
            # Log the error but do not block application startup
            # Common cause: insufficient privileges (superuser required)
            logger.warning(
                "Failed to create the pg_trgm extension: %s "
                "The GIN indexes on response_body and response_headers may not work. "
                "Please run manually: CREATE EXTENSION IF NOT EXISTS pg_trgm;",
                str(e)
            )
    def _verify_pg_ivm_extension(self):
        """
        Verify that the pg_ivm extension is available.
        pg_ivm powers IMMVs (incrementally maintained materialized views) and is
        required by the system. If it is unavailable, log an error and exit.
        """
        from django.db import connection
        # Only applies to PostgreSQL databases
        if connection.vendor != 'postgresql':
            logger.debug("Skipping pg_ivm check: the database is not PostgreSQL")
            return
        # Skip certain management commands (e.g. migrate, makemigrations)
        import sys
        if len(sys.argv) > 1 and sys.argv[1] in ('migrate', 'makemigrations', 'collectstatic', 'check'):
            logger.debug("Skipping pg_ivm check: running a management command")
            return
        try:
            with connection.cursor() as cursor:
                # Check whether the pg_ivm extension is installed
                cursor.execute("""
                    SELECT COUNT(*) FROM pg_extension WHERE extname = 'pg_ivm'
                """)
                count = cursor.fetchone()[0]
                if count > 0:
                    logger.info("✓ pg_ivm extension enabled")
                else:
                    # Try to create the extension
                    try:
                        cursor.execute("CREATE EXTENSION IF NOT EXISTS pg_ivm;")
                        logger.info("✓ pg_ivm extension created and enabled")
                    except Exception as create_error:
                        logger.error(
                            "=" * 60 + "\n"
                            "ERROR: the pg_ivm extension is not installed\n"
                            "=" * 60 + "\n"
                            "pg_ivm is required for incrementally maintained materialized views.\n\n"
                            "Install pg_ivm on the PostgreSQL server:\n"
                            "  curl -sSL https://raw.githubusercontent.com/yyhuni/xingrin/main/docker/scripts/install-pg-ivm.sh | sudo bash\n\n"
                            "Or install manually:\n"
                            "  1. apt install build-essential postgresql-server-dev-15 git\n"
                            "  2. git clone https://github.com/sraoss/pg_ivm.git && cd pg_ivm && make && make install\n"
                            "  3. Add to postgresql.conf: shared_preload_libraries = 'pg_ivm'\n"
                            "  4. Restart PostgreSQL\n"
                            "=" * 60
                        )
                        # Exit in production; only warn in development
                        from django.conf import settings
                        if not settings.DEBUG:
                            sys.exit(1)
        except Exception as e:
            logger.error(f"pg_ivm extension check failed: {e}")

View File

@@ -1,4 +1,4 @@
# Generated by Django 5.2.7 on 2026-01-02 04:45
# Generated by Django 5.2.7 on 2026-01-06 00:55
import django.contrib.postgres.fields
import django.contrib.postgres.indexes

View File

@@ -1,187 +0,0 @@
"""
创建资产搜索 IMMV增量维护物化视图
使用 pg_ivm 扩展创建 IMMV数据变更时自动增量更新无需手动刷新。
包含:
1. asset_search_view - Website 搜索视图
2. endpoint_search_view - Endpoint 搜索视图
"""
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('asset', '0001_initial'),
]
operations = [
# 1. Ensure the pg_ivm extension is enabled
migrations.RunSQL(
sql="CREATE EXTENSION IF NOT EXISTS pg_ivm;",
reverse_sql="-- pg_ivm extension kept for other uses"
),
# ==================== Website IMMV ====================
# 2. Create the asset_search_view IMMV
migrations.RunSQL(
sql="""
SELECT pgivm.create_immv('asset_search_view', $$
SELECT
w.id,
w.url,
w.host,
w.title,
w.tech,
w.status_code,
w.response_headers,
w.response_body,
w.content_type,
w.content_length,
w.webserver,
w.location,
w.vhost,
w.created_at,
w.target_id
FROM website w
$$);
""",
reverse_sql="SELECT pgivm.drop_immv('asset_search_view');"
),
# 3. Create indexes for asset_search_view
migrations.RunSQL(
sql="""
-- unique index
CREATE UNIQUE INDEX IF NOT EXISTS asset_search_view_id_idx
ON asset_search_view (id);
-- fuzzy-search index on host
CREATE INDEX IF NOT EXISTS asset_search_view_host_trgm_idx
ON asset_search_view USING gin (host gin_trgm_ops);
-- fuzzy-search index on title
CREATE INDEX IF NOT EXISTS asset_search_view_title_trgm_idx
ON asset_search_view USING gin (title gin_trgm_ops);
-- fuzzy-search index on url
CREATE INDEX IF NOT EXISTS asset_search_view_url_trgm_idx
ON asset_search_view USING gin (url gin_trgm_ops);
-- fuzzy-search index on response_headers
CREATE INDEX IF NOT EXISTS asset_search_view_headers_trgm_idx
ON asset_search_view USING gin (response_headers gin_trgm_ops);
-- fuzzy-search index on response_body
CREATE INDEX IF NOT EXISTS asset_search_view_body_trgm_idx
ON asset_search_view USING gin (response_body gin_trgm_ops);
-- GIN index on the tech array
CREATE INDEX IF NOT EXISTS asset_search_view_tech_idx
ON asset_search_view USING gin (tech);
-- index on status_code
CREATE INDEX IF NOT EXISTS asset_search_view_status_idx
ON asset_search_view (status_code);
-- sort index on created_at
CREATE INDEX IF NOT EXISTS asset_search_view_created_idx
ON asset_search_view (created_at DESC);
""",
reverse_sql="""
DROP INDEX IF EXISTS asset_search_view_id_idx;
DROP INDEX IF EXISTS asset_search_view_host_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_title_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_url_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_headers_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_body_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_tech_idx;
DROP INDEX IF EXISTS asset_search_view_status_idx;
DROP INDEX IF EXISTS asset_search_view_created_idx;
"""
),
# ==================== Endpoint IMMV ====================
# 4. Create the endpoint_search_view IMMV
migrations.RunSQL(
sql="""
SELECT pgivm.create_immv('endpoint_search_view', $$
SELECT
e.id,
e.url,
e.host,
e.title,
e.tech,
e.status_code,
e.response_headers,
e.response_body,
e.content_type,
e.content_length,
e.webserver,
e.location,
e.vhost,
e.matched_gf_patterns,
e.created_at,
e.target_id
FROM endpoint e
$$);
""",
reverse_sql="SELECT pgivm.drop_immv('endpoint_search_view');"
),
# 5. Create indexes for endpoint_search_view
migrations.RunSQL(
sql="""
-- unique index
CREATE UNIQUE INDEX IF NOT EXISTS endpoint_search_view_id_idx
ON endpoint_search_view (id);
-- fuzzy-search index on host
CREATE INDEX IF NOT EXISTS endpoint_search_view_host_trgm_idx
ON endpoint_search_view USING gin (host gin_trgm_ops);
-- fuzzy-search index on title
CREATE INDEX IF NOT EXISTS endpoint_search_view_title_trgm_idx
ON endpoint_search_view USING gin (title gin_trgm_ops);
-- fuzzy-search index on url
CREATE INDEX IF NOT EXISTS endpoint_search_view_url_trgm_idx
ON endpoint_search_view USING gin (url gin_trgm_ops);
-- fuzzy-search index on response_headers
CREATE INDEX IF NOT EXISTS endpoint_search_view_headers_trgm_idx
ON endpoint_search_view USING gin (response_headers gin_trgm_ops);
-- fuzzy-search index on response_body
CREATE INDEX IF NOT EXISTS endpoint_search_view_body_trgm_idx
ON endpoint_search_view USING gin (response_body gin_trgm_ops);
-- GIN index on the tech array
CREATE INDEX IF NOT EXISTS endpoint_search_view_tech_idx
ON endpoint_search_view USING gin (tech);
-- index on status_code
CREATE INDEX IF NOT EXISTS endpoint_search_view_status_idx
ON endpoint_search_view (status_code);
-- sort index on created_at
CREATE INDEX IF NOT EXISTS endpoint_search_view_created_idx
ON endpoint_search_view (created_at DESC);
""",
reverse_sql="""
DROP INDEX IF EXISTS endpoint_search_view_id_idx;
DROP INDEX IF EXISTS endpoint_search_view_host_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_title_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_url_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_headers_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_body_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_tech_idx;
DROP INDEX IF EXISTS endpoint_search_view_status_idx;
DROP INDEX IF EXISTS endpoint_search_view_created_idx;
"""
),
]

View File

@@ -0,0 +1,104 @@
"""
Create the asset-search materialized views (incrementally maintained via pg_ivm)
These views power the asset search feature with high-performance full-text search.
"""
from django.db import migrations
class Migration(migrations.Migration):
"""创建资产搜索所需的增量物化视图"""
dependencies = [
('asset', '0001_initial'),
]
operations = [
# 1. Ensure the pg_ivm extension is installed
migrations.RunSQL(
sql="CREATE EXTENSION IF NOT EXISTS pg_ivm;",
reverse_sql="DROP EXTENSION IF EXISTS pg_ivm;",
),
# 2. Create the Website search view
# Note: pg_ivm does not support ArrayField, so the tech column is fetched by JOINing the base table
migrations.RunSQL(
sql="""
SELECT pgivm.create_immv('asset_search_view', $$
SELECT
w.id,
w.url,
w.host,
w.title,
w.status_code,
w.response_headers,
w.response_body,
w.content_type,
w.content_length,
w.webserver,
w.location,
w.vhost,
w.created_at,
w.target_id
FROM website w
$$);
""",
reverse_sql="DROP TABLE IF EXISTS asset_search_view CASCADE;",
),
# 3. Create the Endpoint search view
migrations.RunSQL(
sql="""
SELECT pgivm.create_immv('endpoint_search_view', $$
SELECT
e.id,
e.url,
e.host,
e.title,
e.status_code,
e.response_headers,
e.response_body,
e.content_type,
e.content_length,
e.webserver,
e.location,
e.vhost,
e.created_at,
e.target_id
FROM endpoint e
$$);
""",
reverse_sql="DROP TABLE IF EXISTS endpoint_search_view CASCADE;",
),
# 4. Create indexes on the search views (to speed up queries)
migrations.RunSQL(
sql=[
# Website search view indexes
"CREATE INDEX IF NOT EXISTS asset_search_view_host_idx ON asset_search_view (host);",
"CREATE INDEX IF NOT EXISTS asset_search_view_url_idx ON asset_search_view (url);",
"CREATE INDEX IF NOT EXISTS asset_search_view_title_idx ON asset_search_view (title);",
"CREATE INDEX IF NOT EXISTS asset_search_view_status_idx ON asset_search_view (status_code);",
"CREATE INDEX IF NOT EXISTS asset_search_view_created_idx ON asset_search_view (created_at DESC);",
# Endpoint search view indexes
"CREATE INDEX IF NOT EXISTS endpoint_search_view_host_idx ON endpoint_search_view (host);",
"CREATE INDEX IF NOT EXISTS endpoint_search_view_url_idx ON endpoint_search_view (url);",
"CREATE INDEX IF NOT EXISTS endpoint_search_view_title_idx ON endpoint_search_view (title);",
"CREATE INDEX IF NOT EXISTS endpoint_search_view_status_idx ON endpoint_search_view (status_code);",
"CREATE INDEX IF NOT EXISTS endpoint_search_view_created_idx ON endpoint_search_view (created_at DESC);",
],
reverse_sql=[
"DROP INDEX IF EXISTS asset_search_view_host_idx;",
"DROP INDEX IF EXISTS asset_search_view_url_idx;",
"DROP INDEX IF EXISTS asset_search_view_title_idx;",
"DROP INDEX IF EXISTS asset_search_view_status_idx;",
"DROP INDEX IF EXISTS asset_search_view_created_idx;",
"DROP INDEX IF EXISTS endpoint_search_view_host_idx;",
"DROP INDEX IF EXISTS endpoint_search_view_url_idx;",
"DROP INDEX IF EXISTS endpoint_search_view_title_idx;",
"DROP INDEX IF EXISTS endpoint_search_view_status_idx;",
"DROP INDEX IF EXISTS endpoint_search_view_created_idx;",
],
),
]

View File

@@ -0,0 +1,53 @@
# Generated by Django 5.2.7 on 2026-01-07 02:21
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('asset', '0002_create_search_views'),
('scan', '0001_initial'),
('targets', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Screenshot',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('url', models.TextField(help_text='URL the screenshot was taken of')),
('image', models.BinaryField(help_text='Screenshot binary data in WebP format (compressed)')),
('created_at', models.DateTimeField(auto_now_add=True, help_text='Creation time')),
('updated_at', models.DateTimeField(auto_now=True, help_text='Update time')),
('target', models.ForeignKey(help_text='Owning target', on_delete=django.db.models.deletion.CASCADE, related_name='screenshots', to='targets.target')),
],
options={
'verbose_name': 'screenshot',
'verbose_name_plural': 'screenshots',
'db_table': 'screenshot',
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['target'], name='screenshot_target__2f01f6_idx'), models.Index(fields=['-created_at'], name='screenshot_created_c0ad4b_idx')],
'constraints': [models.UniqueConstraint(fields=('target', 'url'), name='unique_screenshot_per_target')],
},
),
migrations.CreateModel(
name='ScreenshotSnapshot',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('url', models.TextField(help_text='URL the screenshot was taken of')),
('image', models.BinaryField(help_text='Screenshot binary data in WebP format (compressed)')),
('created_at', models.DateTimeField(auto_now_add=True, help_text='Creation time')),
('scan', models.ForeignKey(help_text='Owning scan task', on_delete=django.db.models.deletion.CASCADE, related_name='screenshot_snapshots', to='scan.scan')),
],
options={
'verbose_name': 'screenshot snapshot',
'verbose_name_plural': 'screenshot snapshots',
'db_table': 'screenshot_snapshot',
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['scan'], name='screenshot__scan_id_fb8c4d_idx'), models.Index(fields=['-created_at'], name='screenshot__created_804117_idx')],
'constraints': [models.UniqueConstraint(fields=('scan', 'url'), name='unique_screenshot_per_scan_snapshot')],
},
),
]

View File

@@ -0,0 +1,23 @@
# Generated by Django 5.2.7 on 2026-01-07 13:29
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('asset', '0003_add_screenshot_models'),
]
operations = [
migrations.AddField(
model_name='screenshot',
name='status_code',
field=models.SmallIntegerField(blank=True, help_text='HTTP response status code', null=True),
),
migrations.AddField(
model_name='screenshotsnapshot',
name='status_code',
field=models.SmallIntegerField(blank=True, help_text='HTTP response status code', null=True),
),
]

View File

@@ -20,6 +20,12 @@ from .snapshot_models import (
VulnerabilitySnapshot,
)
# Screenshot models
from .screenshot_models import (
Screenshot,
ScreenshotSnapshot,
)
# Statistics models
from .statistics_models import AssetStatistics, StatisticsHistory
@@ -39,6 +45,9 @@ __all__ = [
'HostPortMappingSnapshot',
'EndpointSnapshot',
'VulnerabilitySnapshot',
# Screenshot models
'Screenshot',
'ScreenshotSnapshot',
# Statistics models
'AssetStatistics',
'StatisticsHistory',

View File

@@ -0,0 +1,80 @@
from django.db import models
class ScreenshotSnapshot(models.Model):
    """
    Screenshot snapshot
    Records: website screenshots captured during a specific scan
    """
    id = models.AutoField(primary_key=True)
    scan = models.ForeignKey(
        'scan.Scan',
        on_delete=models.CASCADE,
        related_name='screenshot_snapshots',
        help_text='Owning scan task'
    )
    url = models.TextField(help_text='URL the screenshot was taken of')
    status_code = models.SmallIntegerField(null=True, blank=True, help_text='HTTP response status code')
    image = models.BinaryField(help_text='Screenshot binary data in WebP format (compressed)')
    created_at = models.DateTimeField(auto_now_add=True, help_text='Creation time')
    class Meta:
        db_table = 'screenshot_snapshot'
        verbose_name = 'screenshot snapshot'
        verbose_name_plural = 'screenshot snapshots'
        ordering = ['-created_at']
        indexes = [
            models.Index(fields=['scan']),
            models.Index(fields=['-created_at']),
        ]
        constraints = [
            models.UniqueConstraint(
                fields=['scan', 'url'],
                name='unique_screenshot_per_scan_snapshot'
            ),
        ]
    def __str__(self):
        return f'{self.url} (Scan #{self.scan_id})'
class Screenshot(models.Model):
    """
    Screenshot asset
    Stores: the latest screenshots for a target (synced from snapshots)
    """
    id = models.AutoField(primary_key=True)
    target = models.ForeignKey(
        'targets.Target',
        on_delete=models.CASCADE,
        related_name='screenshots',
        help_text='Owning target'
    )
    url = models.TextField(help_text='URL the screenshot was taken of')
    status_code = models.SmallIntegerField(null=True, blank=True, help_text='HTTP response status code')
    image = models.BinaryField(help_text='Screenshot binary data in WebP format (compressed)')
    created_at = models.DateTimeField(auto_now_add=True, help_text='Creation time')
    updated_at = models.DateTimeField(auto_now=True, help_text='Update time')
    class Meta:
        db_table = 'screenshot'
        verbose_name = 'screenshot'
        verbose_name_plural = 'screenshots'
        ordering = ['-created_at']
        indexes = [
            models.Index(fields=['target']),
            models.Index(fields=['-created_at']),
        ]
        constraints = [
            models.UniqueConstraint(
                fields=['target', 'url'],
                name='unique_screenshot_per_target'
            ),
        ]
    def __str__(self):
        return f'{self.url} (Target #{self.target_id})'

View File

@@ -7,6 +7,7 @@ from .models.snapshot_models import (
EndpointSnapshot,
VulnerabilitySnapshot,
)
from .models.screenshot_models import Screenshot, ScreenshotSnapshot
# Note: the IPAddress and Port models have been refactored into HostPortMapping
@@ -290,3 +291,23 @@ class EndpointSnapshotSerializer(serializers.ModelSerializer):
'created_at',
]
read_only_fields = fields
# ==================== Screenshot serializers ====================
class ScreenshotListSerializer(serializers.ModelSerializer):
"""List serializer for screenshot assets (excludes the image field)"""
class Meta:
model = Screenshot
fields = ['id', 'url', 'status_code', 'created_at', 'updated_at']
read_only_fields = fields
class ScreenshotSnapshotListSerializer(serializers.ModelSerializer):
"""截图快照列表序列化器(不包含 image 字段)"""
class Meta:
model = ScreenshotSnapshot
fields = ['id', 'url', 'status_code', 'created_at']
read_only_fields = fields
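Pairing note: since these list serializers exclude `image`, the queryset can also defer the binary column so list endpoints never pull screenshot bytes from the database. A hypothetical pairing; only the existence of `ScreenshotViewSet` is confirmed by the router registration later in this diff:

```python
# Hypothetical sketch; the real ScreenshotViewSet may differ.
from rest_framework import viewsets


class ScreenshotViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = ScreenshotListSerializer

    def get_queryset(self):
        # defer('image') keeps the large BinaryField out of the SELECT
        return Screenshot.objects.defer('image').order_by('-created_at')
```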

View File

@@ -0,0 +1,186 @@
"""
Playwright 截图服务
使用 Playwright 异步批量捕获网站截图
"""
import asyncio
import logging
from typing import Optional, AsyncGenerator
logger = logging.getLogger(__name__)
class PlaywrightScreenshotService:
"""Playwright 截图服务 - 异步多 Page 并发截图"""
# 内置默认值(不暴露给用户)
DEFAULT_VIEWPORT_WIDTH = 1920
DEFAULT_VIEWPORT_HEIGHT = 1080
DEFAULT_TIMEOUT = 30000 # 毫秒
DEFAULT_JPEG_QUALITY = 85
def __init__(
self,
viewport_width: int = DEFAULT_VIEWPORT_WIDTH,
viewport_height: int = DEFAULT_VIEWPORT_HEIGHT,
timeout: int = DEFAULT_TIMEOUT,
concurrency: int = 5
):
"""
初始化 Playwright 截图服务
Args:
viewport_width: 视口宽度(像素)
viewport_height: 视口高度(像素)
timeout: 页面加载超时时间(毫秒)
concurrency: 并发截图数
"""
self.viewport_width = viewport_width
self.viewport_height = viewport_height
self.timeout = timeout
self.concurrency = concurrency
async def capture_screenshot(self, url: str, page) -> tuple[Optional[bytes], Optional[int]]:
"""
捕获单个 URL 的截图
Args:
url: 目标 URL
page: Playwright Page 对象
Returns:
(screenshot_bytes, status_code) 元组
- screenshot_bytes: JPEG 格式的截图字节数据,失败返回 None
- status_code: HTTP 响应状态码,失败返回 None
"""
status_code = None
try:
# 尝试加载页面,即使返回错误状态码也继续截图
try:
response = await page.goto(url, timeout=self.timeout, wait_until='networkidle')
if response:
status_code = response.status
except Exception as goto_error:
# 页面加载失败4xx/5xx 或其他错误),但页面可能已部分渲染
# 仍然尝试截图以捕获错误页面
logger.debug("页面加载异常但尝试截图: %s, 错误: %s", url, str(goto_error)[:50])
# 尝试截图(即使 goto 失败)
screenshot_bytes = await page.screenshot(
type='jpeg',
quality=self.DEFAULT_JPEG_QUALITY,
full_page=False
)
return (screenshot_bytes, status_code)
except asyncio.TimeoutError:
logger.warning("截图超时: %s", url)
return (None, None)
except Exception as e:
logger.warning("截图失败: %s, 错误: %s", url, str(e)[:100])
return (None, None)
async def _capture_with_semaphore(
self,
url: str,
context,
semaphore: asyncio.Semaphore
) -> tuple[str, Optional[bytes], Optional[int]]:
"""
使用信号量控制并发的截图任务
Args:
url: 目标 URL
context: Playwright BrowserContext
semaphore: 并发控制信号量
Returns:
(url, screenshot_bytes, status_code) 元组
"""
async with semaphore:
page = await context.new_page()
try:
screenshot_bytes, status_code = await self.capture_screenshot(url, page)
return (url, screenshot_bytes, status_code)
finally:
await page.close()
async def capture_batch(
self,
urls: list[str]
) -> AsyncGenerator[tuple[str, Optional[bytes], Optional[int]], None]:
"""
批量捕获截图(异步生成器)
使用单个 BrowserContext + 多 Page 并发模式
通过 Semaphore 控制并发数
Args:
urls: URL 列表
Yields:
(url, screenshot_bytes, status_code) 元组
"""
if not urls:
return
from playwright.async_api import async_playwright
async with async_playwright() as p:
# 启动浏览器headless 模式)
browser = await p.chromium.launch(
headless=True,
args=[
'--no-sandbox',
'--disable-setuid-sandbox',
'--disable-dev-shm-usage',
'--disable-gpu'
]
)
try:
# 创建单个 context
context = await browser.new_context(
viewport={
'width': self.viewport_width,
'height': self.viewport_height
},
ignore_https_errors=True,
user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
)
# 使用 Semaphore 控制并发
semaphore = asyncio.Semaphore(self.concurrency)
# 创建所有任务
tasks = [
self._capture_with_semaphore(url, context, semaphore)
for url in urls
]
# 使用 as_completed 实现流式返回
for coro in asyncio.as_completed(tasks):
result = await coro
yield result
await context.close()
finally:
await browser.close()
async def capture_batch_collect(
self,
urls: list[str]
) -> list[tuple[str, Optional[bytes], Optional[int]]]:
"""
批量捕获截图(收集所有结果)
Args:
urls: URL 列表
Returns:
[(url, screenshot_bytes, status_code), ...] 列表
"""
results = []
async for result in self.capture_batch(urls):
results.append(result)
return results
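A minimal usage sketch for this service; the module path is an assumption, everything else uses the methods defined above:

```python
import asyncio

# assumed import path for the service defined above
from apps.asset.services.playwright_screenshot_service import PlaywrightScreenshotService


async def main() -> None:
    service = PlaywrightScreenshotService(concurrency=3)
    results = await service.capture_batch_collect([
        'https://example.com',
        'https://example.org',
    ])
    for url, image_bytes, status_code in results:
        size = len(image_bytes) if image_bytes else 0
        print(f'{url}: status={status_code}, jpeg_bytes={size}')


if __name__ == '__main__':
    asyncio.run(main())
```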

View File

@@ -0,0 +1,185 @@
"""
截图服务
负责截图的压缩、保存和同步
"""
import io
import logging
import os
from typing import Optional
from PIL import Image
logger = logging.getLogger(__name__)
class ScreenshotService:
"""截图服务 - 负责压缩、保存和同步"""
def __init__(self, max_width: int = 800, target_kb: int = 100):
"""
初始化截图服务
Args:
max_width: 最大宽度(像素)
target_kb: 目标文件大小KB
"""
self.max_width = max_width
self.target_kb = target_kb
def compress_screenshot(self, image_path: str) -> Optional[bytes]:
"""
压缩截图为 WebP 格式
Args:
image_path: PNG 截图文件路径
Returns:
压缩后的 WebP 二进制数据,失败返回 None
"""
if not os.path.exists(image_path):
logger.warning(f"截图文件不存在: {image_path}")
return None
try:
with Image.open(image_path) as img:
return self._compress_image(img)
except Exception as e:
logger.error(f"压缩截图失败: {image_path}, 错误: {e}")
return None
def compress_from_bytes(self, image_bytes: bytes) -> Optional[bytes]:
"""
从字节数据压缩截图为 WebP 格式
Args:
image_bytes: JPEG/PNG 图片字节数据
Returns:
压缩后的 WebP 二进制数据,失败返回 None
"""
if not image_bytes:
return None
try:
img = Image.open(io.BytesIO(image_bytes))
return self._compress_image(img)
except Exception as e:
logger.error(f"从字节压缩截图失败: {e}")
return None
def _compress_image(self, img: Image.Image) -> Optional[bytes]:
"""
压缩 PIL Image 对象为 WebP 格式
Args:
img: PIL Image 对象
Returns:
压缩后的 WebP 二进制数据
"""
try:
if img.mode in ('RGBA', 'P'):
img = img.convert('RGB')
width, height = img.size
if width > self.max_width:
ratio = self.max_width / width
new_size = (self.max_width, int(height * ratio))
img = img.resize(new_size, Image.Resampling.LANCZOS)
quality = 80
while quality >= 10:
buffer = io.BytesIO()
img.save(buffer, format='WEBP', quality=quality, method=6)
if len(buffer.getvalue()) <= self.target_kb * 1024:
return buffer.getvalue()
quality -= 10
return buffer.getvalue()
except Exception as e:
logger.error(f"压缩图片失败: {e}")
return None
def save_screenshot_snapshot(
self,
scan_id: int,
url: str,
image_data: bytes,
status_code: int | None = None
) -> bool:
"""
保存截图快照到 ScreenshotSnapshot 表
Args:
scan_id: 扫描 ID
url: 截图对应的 URL
image_data: 压缩后的图片二进制数据
status_code: HTTP 响应状态码
Returns:
是否保存成功
"""
from apps.asset.models import ScreenshotSnapshot
try:
ScreenshotSnapshot.objects.update_or_create(
scan_id=scan_id,
url=url,
defaults={'image': image_data, 'status_code': status_code}
)
return True
except Exception as e:
logger.error(f"保存截图快照失败: scan_id={scan_id}, url={url}, 错误: {e}")
return False
def sync_screenshots_to_asset(self, scan_id: int, target_id: int) -> int:
"""
将扫描的截图快照同步到资产表
Args:
scan_id: 扫描 ID
target_id: 目标 ID
Returns:
同步的截图数量
"""
from apps.asset.models import Screenshot, ScreenshotSnapshot
snapshots = ScreenshotSnapshot.objects.filter(scan_id=scan_id)
count = 0
for snapshot in snapshots:
try:
Screenshot.objects.update_or_create(
target_id=target_id,
url=snapshot.url,
defaults={
'image': snapshot.image,
'status_code': snapshot.status_code
}
)
count += 1
except Exception as e:
logger.error(f"同步截图到资产表失败: url={snapshot.url}, 错误: {e}")
logger.info(f"同步截图完成: scan_id={scan_id}, target_id={target_id}, 数量={count}")
return count
def process_and_save_screenshot(self, scan_id: int, url: str, image_path: str) -> bool:
"""
处理并保存截图(压缩 + 保存快照)
Args:
scan_id: 扫描 ID
url: 截图对应的 URL
image_path: PNG 截图文件路径
Returns:
是否处理成功
"""
image_data = self.compress_screenshot(image_path)
if image_data is None:
return False
return self.save_screenshot_snapshot(scan_id, url, image_data)
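A hedged end-to-end sketch combining the two services in this diff: stream JPEG captures from Playwright, recompress to WebP, and persist snapshots (import paths are assumptions):

```python
import asyncio

# assumed import paths for the two services defined in this diff
from apps.asset.services.playwright_screenshot_service import PlaywrightScreenshotService
from apps.asset.services.screenshot_service import ScreenshotService


async def capture_and_store(scan_id: int, urls: list[str]) -> int:
    capture = PlaywrightScreenshotService()
    store = ScreenshotService(max_width=800, target_kb=100)
    saved = 0
    # capture_batch yields (url, jpeg_bytes, status_code) as pages finish
    async for url, jpeg_bytes, status_code in capture.capture_batch(urls):
        webp = store.compress_from_bytes(jpeg_bytes) if jpeg_bytes else None
        if webp and store.save_screenshot_snapshot(scan_id, url, webp, status_code):
            saved += 1
    return saved
```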

View File

@@ -11,7 +11,7 @@
import logging
import re
from typing import Optional, List, Dict, Any, Tuple, Literal
from typing import Optional, List, Dict, Any, Tuple, Literal, Iterator
from django.db import connection
@@ -37,46 +37,55 @@ VIEW_MAPPING = {
'endpoint': 'endpoint_search_view',
}
# Map asset types to base table names (used to JOIN for array fields)
# ⚠️ Important: pg_ivm does not support ArrayField, so all array fields must be JOINed from the base table
TABLE_MAPPING = {
'website': 'website',
'endpoint': 'endpoint',
}
# Valid asset types
VALID_ASSET_TYPES = {'website', 'endpoint'}
# Website select fields
# Website select fields (v = view, t = base table)
# ⚠️ Note: t.tech comes from the base table because pg_ivm does not support ArrayField
WEBSITE_SELECT_FIELDS = """
id,
url,
host,
title,
tech,
status_code,
response_headers,
response_body,
content_type,
content_length,
webserver,
location,
vhost,
created_at,
target_id
v.id,
v.url,
v.host,
v.title,
t.tech, -- ArrayField, JOINed from the website table
v.status_code,
v.response_headers,
v.response_body,
v.content_type,
v.content_length,
v.webserver,
v.location,
v.vhost,
v.created_at,
v.target_id
"""
# Endpoint select fields (includes matched_gf_patterns)
# Endpoint select fields
# ⚠️ Note: t.tech and t.matched_gf_patterns come from the base table because pg_ivm does not support ArrayField
ENDPOINT_SELECT_FIELDS = """
id,
url,
host,
title,
tech,
status_code,
response_headers,
response_body,
content_type,
content_length,
webserver,
location,
vhost,
matched_gf_patterns,
created_at,
target_id
v.id,
v.url,
v.host,
v.title,
t.tech, -- ArrayField, JOINed from the endpoint table
v.status_code,
v.response_headers,
v.response_body,
v.content_type,
v.content_length,
v.webserver,
v.location,
v.vhost,
t.matched_gf_patterns, -- ArrayField, JOINed from the endpoint table
v.created_at,
v.target_id
"""
@@ -119,8 +128,8 @@ class SearchQueryParser:
# Check for operator syntax; if absent, treat the query as a fuzzy host search
if not cls.CONDITION_PATTERN.search(query):
# Bare text defaults to a fuzzy host search
return "host ILIKE %s", [f"%{query}%"]
# Bare text defaults to a fuzzy host search (v is the view alias)
return "v.host ILIKE %s", [f"%{query}%"]
# Split into OR groups on ||
or_groups = cls._split_by_or(query)
@@ -273,45 +282,45 @@ class SearchQueryParser:
def _build_like_condition(cls, field: str, value: str, is_array: bool) -> Tuple[str, List[Any]]:
"""构建模糊匹配条件"""
if is_array:
# 数组字段:检查数组中是否有元素包含该值
return f"EXISTS (SELECT 1 FROM unnest({field}) AS t WHERE t ILIKE %s)", [f"%{value}%"]
# 数组字段:检查数组中是否有元素包含该值(从原表 t 获取)
return f"EXISTS (SELECT 1 FROM unnest(t.{field}) AS elem WHERE elem ILIKE %s)", [f"%{value}%"]
elif field == 'status_code':
# 状态码是整数,模糊匹配转为精确匹配
try:
return f"{field} = %s", [int(value)]
return f"v.{field} = %s", [int(value)]
except ValueError:
return f"{field}::text ILIKE %s", [f"%{value}%"]
return f"v.{field}::text ILIKE %s", [f"%{value}%"]
else:
return f"{field} ILIKE %s", [f"%{value}%"]
return f"v.{field} ILIKE %s", [f"%{value}%"]
@classmethod
def _build_exact_condition(cls, field: str, value: str, is_array: bool) -> Tuple[str, List[Any]]:
"""构建精确匹配条件"""
if is_array:
# 数组字段:检查数组中是否包含该精确值
return f"%s = ANY({field})", [value]
# 数组字段:检查数组中是否包含该精确值(从原表 t 获取)
return f"%s = ANY(t.{field})", [value]
elif field == 'status_code':
# 状态码是整数
try:
return f"{field} = %s", [int(value)]
return f"v.{field} = %s", [int(value)]
except ValueError:
return f"{field}::text = %s", [value]
return f"v.{field}::text = %s", [value]
else:
return f"{field} = %s", [value]
return f"v.{field} = %s", [value]
@classmethod
def _build_not_equal_condition(cls, field: str, value: str, is_array: bool) -> Tuple[str, List[Any]]:
"""构建不等于条件"""
if is_array:
# 数组字段:检查数组中不包含该值
return f"NOT (%s = ANY({field}))", [value]
# 数组字段:检查数组中不包含该值(从原表 t 获取)
return f"NOT (%s = ANY(t.{field}))", [value]
elif field == 'status_code':
try:
return f"({field} IS NULL OR {field} != %s)", [int(value)]
return f"(v.{field} IS NULL OR v.{field} != %s)", [int(value)]
except ValueError:
return f"({field} IS NULL OR {field}::text != %s)", [value]
return f"(v.{field} IS NULL OR v.{field}::text != %s)", [value]
else:
return f"({field} IS NULL OR {field} != %s)", [value]
return f"(v.{field} IS NULL OR v.{field} != %s)", [value]
AssetType = Literal['website', 'endpoint']
@@ -339,15 +348,18 @@ class AssetSearchService:
"""
where_clause, params = SearchQueryParser.parse(query)
# Choose the view and fields by asset type
# Choose the view, base table, and fields by asset type
view_name = VIEW_MAPPING.get(asset_type, 'asset_search_view')
table_name = TABLE_MAPPING.get(asset_type, 'website')
select_fields = ENDPOINT_SELECT_FIELDS if asset_type == 'endpoint' else WEBSITE_SELECT_FIELDS
# JOIN 原表获取数组字段tech, matched_gf_patterns
sql = f"""
SELECT {select_fields}
FROM {view_name}
FROM {view_name} v
JOIN {table_name} t ON v.id = t.id
WHERE {where_clause}
ORDER BY created_at DESC
ORDER BY v.created_at DESC
"""
# Add LIMIT
@@ -369,26 +381,31 @@ class AssetSearchService:
logger.error(f"Search query failed: {e}, SQL: {sql}, params: {params}")
raise
def count(self, query: str, asset_type: AssetType = 'website') -> int:
def count(self, query: str, asset_type: AssetType = 'website', statement_timeout_ms: int = 300000) -> int:
"""
统计搜索结果数量
Args:
query: 搜索查询字符串
asset_type: 资产类型 ('website''endpoint')
statement_timeout_ms: SQL 语句超时时间(毫秒),默认 5 分钟
Returns:
int: 结果总数
"""
where_clause, params = SearchQueryParser.parse(query)
# 根据资产类型选择视图
# 根据资产类型选择视图和原表
view_name = VIEW_MAPPING.get(asset_type, 'asset_search_view')
table_name = TABLE_MAPPING.get(asset_type, 'website')
sql = f"SELECT COUNT(*) FROM {view_name} WHERE {where_clause}"
# JOIN 原表以支持数组字段查询
sql = f"SELECT COUNT(*) FROM {view_name} v JOIN {table_name} t ON v.id = t.id WHERE {where_clause}"
try:
with connection.cursor() as cursor:
# 为导出设置更长的超时时间(仅影响当前会话)
cursor.execute(f"SET LOCAL statement_timeout = {statement_timeout_ms}")
cursor.execute(sql, params)
return cursor.fetchone()[0]
except Exception as e:
@@ -399,41 +416,62 @@ class AssetSearchService:
self,
query: str,
asset_type: AssetType = 'website',
batch_size: int = 1000
):
batch_size: int = 1000,
statement_timeout_ms: int = 300000
) -> Iterator[Dict[str, Any]]:
"""
Stream search results (server-side cursor, memory friendly)
Stream search results (batched queries, memory friendly)
Args:
query: search query string
asset_type: asset type ('website' or 'endpoint')
batch_size: number of rows fetched per batch
statement_timeout_ms: SQL statement timeout in milliseconds (default: 5 minutes)
Yields:
Dict: a single search result
"""
where_clause, params = SearchQueryParser.parse(query)
# Select the view and fields by asset type
# Select the view, the original table, and the fields by asset type
view_name = VIEW_MAPPING.get(asset_type, 'asset_search_view')
table_name = TABLE_MAPPING.get(asset_type, 'website')
select_fields = ENDPOINT_SELECT_FIELDS if asset_type == 'endpoint' else WEBSITE_SELECT_FIELDS
sql = f"""
SELECT {select_fields}
FROM {view_name}
WHERE {where_clause}
ORDER BY created_at DESC
"""
# Query in batches with OFFSET/LIMIT (Django does not support named cursors)
offset = 0
try:
# Use a server-side cursor to avoid loading all data into memory at once
with connection.cursor(name='export_cursor') as cursor:
cursor.itersize = batch_size
cursor.execute(sql, params)
columns = [col[0] for col in cursor.description]
while True:
# JOIN the original table to fetch array fields
sql = f"""
SELECT {select_fields}
FROM {view_name} v
JOIN {table_name} t ON v.id = t.id
WHERE {where_clause}
ORDER BY v.created_at DESC
LIMIT {batch_size} OFFSET {offset}
"""
for row in cursor:
with connection.cursor() as cursor:
# Set a longer timeout for exports (affects only the current session)
cursor.execute(f"SET LOCAL statement_timeout = {statement_timeout_ms}")
cursor.execute(sql, params)
columns = [col[0] for col in cursor.description]
rows = cursor.fetchall()
if not rows:
break
for row in rows:
yield dict(zip(columns, row))
# If fewer rows than batch_size came back, this was the last batch
if len(rows) < batch_size:
break
offset += batch_size
except Exception as e:
logger.error(f"Streaming search query failed: {e}, SQL: {sql}, params: {params}")
raise
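A minimal usage sketch for the batched iterator (the call site is assumed, not shown in the diff). Note that OFFSET/LIMIT pagination re-runs the sort for each batch, so very deep offsets get slower; the loop above accepts that trade-off in exchange for working without named cursors.

service = AssetSearchService()
for row in service.search_iter('tech="nginx"', asset_type='endpoint', batch_size=500):
    print(row['url'], row['status_code'])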

View File

@@ -1,7 +0,0 @@
"""
Task module for the Asset app
Note: materialized-view refresh has moved to the APScheduler scheduled tasks (apps.engine.scheduler)
"""
__all__ = []

View File

@@ -12,17 +12,22 @@ from .views import (
AssetStatisticsViewSet,
AssetSearchView,
AssetSearchExportView,
EndpointViewSet,
HostPortMappingViewSet,
ScreenshotViewSet,
)
# Create the DRF router
router = DefaultRouter()
# Register ViewSets
# Note: the IPAddress model was refactored into HostPortMapping; its routes were removed
router.register(r'subdomains', SubdomainViewSet, basename='subdomain')
router.register(r'websites', WebSiteViewSet, basename='website')
router.register(r'directories', DirectoryViewSet, basename='directory')
router.register(r'endpoints', EndpointViewSet, basename='endpoint')
router.register(r'ip-addresses', HostPortMappingViewSet, basename='ip-address')
router.register(r'vulnerabilities', VulnerabilityViewSet, basename='vulnerability')
router.register(r'screenshots', ScreenshotViewSet, basename='screenshot')
router.register(r'statistics', AssetStatisticsViewSet, basename='asset-statistics')
urlpatterns = [

View File

@@ -18,6 +18,8 @@ from .asset_views import (
EndpointSnapshotViewSet,
HostPortMappingSnapshotViewSet,
VulnerabilitySnapshotViewSet,
ScreenshotViewSet,
ScreenshotSnapshotViewSet,
)
from .search_views import AssetSearchView, AssetSearchExportView
@@ -35,6 +37,8 @@ __all__ = [
'EndpointSnapshotViewSet',
'HostPortMappingSnapshotViewSet',
'VulnerabilitySnapshotViewSet',
'ScreenshotViewSet',
'ScreenshotSnapshotViewSet',
'AssetSearchView',
'AssetSearchExportView',
]

View File

@@ -8,7 +8,6 @@ from rest_framework.request import Request
from rest_framework.exceptions import NotFound, ValidationError as DRFValidationError
from django.core.exceptions import ValidationError, ObjectDoesNotExist
from django.db import DatabaseError, IntegrityError, OperationalError
from django.http import StreamingHttpResponse
from ..serializers import (
SubdomainListSerializer, WebSiteSerializer, DirectorySerializer,
@@ -243,7 +242,7 @@ class SubdomainViewSet(viewsets.ModelViewSet):
CSV columns: name, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -254,12 +253,41 @@ class SubdomainViewSet(viewsets.ModelViewSet):
headers = ['name', 'created_at']
formatters = {'created_at': format_datetime}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-subdomains.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-subdomains.csv"'
return response
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request, **kwargs):
"""批量删除子域名
POST /api/assets/subdomains/bulk-delete/
请求体: {"ids": [1, 2, 3]}
响应: {"deletedCount": 3}
"""
ids = request.data.get('ids', [])
if not ids or not isinstance(ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ids is required and must be a list',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
from ..models import Subdomain
deleted_count, _ = Subdomain.objects.filter(id__in=ids).delete()
return success_response(data={'deletedCount': deleted_count})
except Exception as e:
logger.exception("Bulk delete of subdomains failed")
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Failed to delete subdomains',
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
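A hypothetical client call for the bulk-delete action (host and token are placeholders; the same shape applies to the websites, directories, and endpoints variants below). The response envelope shown is illustrative.

import requests

resp = requests.post(
    "http://localhost:8000/api/assets/subdomains/bulk-delete/",
    json={"ids": [1, 2, 3]},
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json())  # e.g. {"data": {"deletedCount": 3}}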
class WebSiteViewSet(viewsets.ModelViewSet):
@@ -369,7 +397,7 @@ class WebSiteViewSet(viewsets.ModelViewSet):
CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -387,12 +415,41 @@ class WebSiteViewSet(viewsets.ModelViewSet):
'tech': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-websites.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-websites.csv"'
return response
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request, **kwargs):
"""批量删除网站
POST /api/assets/websites/bulk-delete/
请求体: {"ids": [1, 2, 3]}
响应: {"deletedCount": 3}
"""
ids = request.data.get('ids', [])
if not ids or not isinstance(ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ids is required and must be a list',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
from ..models import WebSite
deleted_count, _ = WebSite.objects.filter(id__in=ids).delete()
return success_response(data={'deletedCount': deleted_count})
except Exception as e:
logger.exception("Bulk delete of websites failed")
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Failed to delete websites',
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
class DirectoryViewSet(viewsets.ModelViewSet):
@@ -499,7 +556,7 @@ class DirectoryViewSet(viewsets.ModelViewSet):
CSV columns: url, status, content_length, words, lines, content_type, duration, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -515,12 +572,41 @@ class DirectoryViewSet(viewsets.ModelViewSet):
'created_at': format_datetime,
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-directories.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-directories.csv"'
return response
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request, **kwargs):
"""批量删除目录
POST /api/assets/directories/bulk-delete/
请求体: {"ids": [1, 2, 3]}
响应: {"deletedCount": 3}
"""
ids = request.data.get('ids', [])
if not ids or not isinstance(ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ids is required and must be a list',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
from ..models import Directory
deleted_count, _ = Directory.objects.filter(id__in=ids).delete()
return success_response(data={'deletedCount': deleted_count})
except Exception as e:
logger.exception("Bulk delete of directories failed")
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Failed to delete directories',
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
class EndpointViewSet(viewsets.ModelViewSet):
@@ -630,7 +716,7 @@ class EndpointViewSet(viewsets.ModelViewSet):
CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, matched_gf_patterns, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -649,12 +735,41 @@ class EndpointViewSet(viewsets.ModelViewSet):
'matched_gf_patterns': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-endpoints.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-endpoints.csv"'
return response
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request, **kwargs):
"""批量删除端点
POST /api/assets/endpoints/bulk-delete/
请求体: {"ids": [1, 2, 3]}
响应: {"deletedCount": 3}
"""
ids = request.data.get('ids', [])
if not ids or not isinstance(ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ids is required and must be a list',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
from ..models import Endpoint
deleted_count, _ = Endpoint.objects.filter(id__in=ids).delete()
return success_response(data={'deletedCount': deleted_count})
except Exception as e:
logger.exception("Bulk delete of endpoints failed")
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Failed to delete endpoints',
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
class HostPortMappingViewSet(viewsets.ModelViewSet):
@@ -707,7 +822,7 @@ class HostPortMappingViewSet(viewsets.ModelViewSet):
CSV columns: ip, host, port, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -722,14 +837,44 @@ class HostPortMappingViewSet(viewsets.ModelViewSet):
'created_at': format_datetime
}
# Generate the streaming response
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-ip-addresses.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-ip-addresses.csv"'
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request, **kwargs):
"""批量删除 IP 地址映射
return response
POST /api/assets/ip-addresses/bulk-delete/
Request body: {"ips": ["192.168.1.1", "10.0.0.1"]}
Response: {"deletedCount": 3}
Note: because IPs are displayed in aggregate, deletion takes a list of IPs
and removes every host:port mapping record under each IP
"""
ips = request.data.get('ips', [])
if not ips or not isinstance(ips, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ips is required and must be a list',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
from ..models import HostPortMapping
deleted_count, _ = HostPortMapping.objects.filter(ip__in=ips).delete()
return success_response(data={'deletedCount': deleted_count})
except Exception as e:
logger.exception("Bulk delete of IP address mappings failed")
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Failed to delete ip addresses',
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
class VulnerabilityViewSet(viewsets.ModelViewSet):
@@ -801,7 +946,7 @@ class SubdomainSnapshotViewSet(viewsets.ModelViewSet):
CSV columns: name, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -812,12 +957,12 @@ class SubdomainSnapshotViewSet(viewsets.ModelViewSet):
headers = ['name', 'created_at']
formatters = {'created_at': format_datetime}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-subdomains.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-subdomains.csv"'
return response
class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
@@ -855,7 +1000,7 @@ class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -873,12 +1018,12 @@ class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
'tech': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-websites.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-websites.csv"'
return response
class DirectorySnapshotViewSet(viewsets.ModelViewSet):
@@ -913,7 +1058,7 @@ class DirectorySnapshotViewSet(viewsets.ModelViewSet):
CSV columns: url, status, content_length, words, lines, content_type, duration, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -929,12 +1074,12 @@ class DirectorySnapshotViewSet(viewsets.ModelViewSet):
'created_at': format_datetime,
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-directories.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-directories.csv"'
return response
class EndpointSnapshotViewSet(viewsets.ModelViewSet):
@@ -972,7 +1117,7 @@ class EndpointSnapshotViewSet(viewsets.ModelViewSet):
CSV columns: url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, matched_gf_patterns, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -991,12 +1136,12 @@ class EndpointSnapshotViewSet(viewsets.ModelViewSet):
'matched_gf_patterns': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-endpoints.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-endpoints.csv"'
return response
class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
@@ -1031,7 +1176,7 @@ class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
CSV columns: ip, host, port, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -1046,14 +1191,12 @@ class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
'created_at': format_datetime
}
# Generate the streaming response
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-ip-addresses.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-ip-addresses.csv"'
return response
class VulnerabilitySnapshotViewSet(viewsets.ModelViewSet):
@@ -1082,3 +1225,162 @@ class VulnerabilitySnapshotViewSet(viewsets.ModelViewSet):
if scan_pk:
return self.service.get_by_scan(scan_pk, filter_query=filter_query)
return self.service.get_all(filter_query=filter_query)
# ==================== Screenshot ViewSets ====================
class ScreenshotViewSet(viewsets.ModelViewSet):
"""Screenshot asset ViewSet
Supports two access patterns:
1. Nested route: GET /api/targets/{target_pk}/screenshots/
2. Standalone route: GET /api/screenshots/ (global query)
Supports the smart filter syntax (filter parameter):
- url="example" fuzzy URL match
"""
from ..serializers import ScreenshotListSerializer
serializer_class = ScreenshotListSerializer
pagination_class = BasePagination
filter_backends = [filters.OrderingFilter]
ordering = ['-created_at']
def get_queryset(self):
"""根据是否有 target_pk 参数决定查询范围"""
from ..models import Screenshot
target_pk = self.kwargs.get('target_pk')
filter_query = self.request.query_params.get('filter', None)
queryset = Screenshot.objects.all()
if target_pk:
queryset = queryset.filter(target_id=target_pk)
if filter_query:
# Simple fuzzy URL match
queryset = queryset.filter(url__icontains=filter_query)
return queryset.order_by('-created_at')
@action(detail=True, methods=['get'], url_path='image')
def image(self, request, pk=None, **kwargs):
"""获取截图图片
GET /api/assets/screenshots/{id}/image/
返回 WebP 格式的图片二进制数据
"""
from django.http import HttpResponse
from ..models import Screenshot
try:
screenshot = Screenshot.objects.get(pk=pk)
if not screenshot.image:
return error_response(
code=ErrorCodes.NOT_FOUND,
message='Screenshot image not found',
status_code=status.HTTP_404_NOT_FOUND
)
response = HttpResponse(screenshot.image, content_type='image/webp')
response['Content-Disposition'] = f'inline; filename="screenshot_{pk}.webp"'
return response
except Screenshot.DoesNotExist:
return error_response(
code=ErrorCodes.NOT_FOUND,
message='Screenshot not found',
status_code=status.HTTP_404_NOT_FOUND
)
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request, **kwargs):
"""批量删除截图
POST /api/assets/screenshots/bulk-delete/
请求体: {"ids": [1, 2, 3]}
响应: {"deletedCount": 3}
"""
ids = request.data.get('ids', [])
if not ids or not isinstance(ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='ids is required and must be a list',
status_code=status.HTTP_400_BAD_REQUEST
)
try:
from ..models import Screenshot
deleted_count, _ = Screenshot.objects.filter(id__in=ids).delete()
return success_response(data={'deletedCount': deleted_count})
except Exception as e:
logger.exception("Bulk delete of screenshots failed")
return error_response(
code=ErrorCodes.SERVER_ERROR,
message='Failed to delete screenshots',
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR
)
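A hypothetical client call for the image action (host and ID are placeholders); the response body is the raw WebP bytes served inline by the view above.

import requests

resp = requests.get("http://localhost:8000/api/assets/screenshots/42/image/")
if resp.ok and resp.headers.get("Content-Type", "").startswith("image/webp"):
    with open("screenshot_42.webp", "wb") as f:
        f.write(resp.content)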
class ScreenshotSnapshotViewSet(viewsets.ModelViewSet):
"""截图快照 ViewSet - 嵌套路由GET /api/scans/{scan_pk}/screenshots/
支持智能过滤语法filter 参数):
- url="example" URL 模糊匹配
"""
from ..serializers import ScreenshotSnapshotListSerializer
serializer_class = ScreenshotSnapshotListSerializer
pagination_class = BasePagination
filter_backends = [filters.OrderingFilter]
ordering = ['-created_at']
def get_queryset(self):
"""根据 scan_pk 参数查询"""
from ..models import ScreenshotSnapshot
scan_pk = self.kwargs.get('scan_pk')
filter_query = self.request.query_params.get('filter', None)
queryset = ScreenshotSnapshot.objects.all()
if scan_pk:
queryset = queryset.filter(scan_id=scan_pk)
if filter_query:
# Simple fuzzy URL match
queryset = queryset.filter(url__icontains=filter_query)
return queryset.order_by('-created_at')
@action(detail=True, methods=['get'], url_path='image')
def image(self, request, pk=None, **kwargs):
"""获取截图快照图片
GET /api/scans/{scan_pk}/screenshots/{id}/image/
返回 WebP 格式的图片二进制数据
"""
from django.http import HttpResponse
from ..models import ScreenshotSnapshot
try:
screenshot = ScreenshotSnapshot.objects.get(pk=pk)
if not screenshot.image:
return error_response(
code=ErrorCodes.NOT_FOUND,
message='Screenshot image not found',
status_code=status.HTTP_404_NOT_FOUND
)
response = HttpResponse(screenshot.image, content_type='image/webp')
response['Content-Disposition'] = f'inline; filename="screenshot_snapshot_{pk}.webp"'
return response
except ScreenshotSnapshot.DoesNotExist:
return error_response(
code=ErrorCodes.NOT_FOUND,
message='Screenshot snapshot not found',
status_code=status.HTTP_404_NOT_FOUND
)

View File

@@ -33,7 +33,6 @@ from urllib.parse import urlparse, urlunparse
from rest_framework import status
from rest_framework.views import APIView
from rest_framework.request import Request
from django.http import StreamingHttpResponse
from django.db import connection
from apps.common.response_helpers import success_response, error_response
@@ -285,7 +284,7 @@ class AssetSearchExportView(APIView):
asset_type: asset type ('website' or 'endpoint', default 'website')
Response:
CSV file stream (uses a server-side cursor; supports very large exports)
CSV file (with Content-Length so the browser can show download progress)
"""
def __init__(self, **kwargs):
@@ -313,8 +312,8 @@ class AssetSearchExportView(APIView):
return headers, formatters
def get(self, request: Request):
"""导出搜索结果为 CSV流式导出,无数量限制"""
from apps.common.utils import generate_csv_rows
"""导出搜索结果为 CSV带 Content-Length支持下载进度显示"""
from apps.common.utils import create_csv_export_response
# Get the search query
query = request.query_params.get('q', '').strip()
@@ -347,18 +346,16 @@ class AssetSearchExportView(APIView):
# Get the headers and formatters
headers, formatters = self._get_headers_and_formatters(asset_type)
# Get the streaming data iterator
data_iterator = self.service.search_iter(query, asset_type)
# Generate the filename
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
filename = f'search_{asset_type}_{timestamp}.csv'
# Return the streaming response
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
# Use the shared export helper
data_iterator = self.service.search_iter(query, asset_type)
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=filename,
field_formatters=formatters,
show_progress=True # show download progress
)
response['Content-Disposition'] = f'attachment; filename="{filename}"'
return response

View File

@@ -0,0 +1,34 @@
# Generated by Django 5.2.7 on 2026-01-06 00:55
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('targets', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='BlacklistRule',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('pattern', models.CharField(help_text='Rule pattern, e.g. *.gov, 10.0.0.0/8, 192.168.1.1', max_length=255)),
('rule_type', models.CharField(choices=[('domain', 'Domain'), ('ip', 'IP address'), ('cidr', 'CIDR range'), ('keyword', 'Keyword')], help_text='Rule type: domain, ip, cidr', max_length=20)),
('scope', models.CharField(choices=[('global', 'Global rule'), ('target', 'Target rule')], db_index=True, help_text='Scope: global or target', max_length=20)),
('description', models.CharField(blank=True, default='', help_text='Rule description', max_length=500)),
('created_at', models.DateTimeField(auto_now_add=True)),
('target', models.ForeignKey(blank=True, help_text='Associated Target (set only when scope=target)', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='blacklist_rules', to='targets.target')),
],
options={
'db_table': 'blacklist_rule',
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['scope', 'rule_type'], name='blacklist_r_scope_6ff77f_idx'), models.Index(fields=['target', 'scope'], name='blacklist_r_target__191441_idx')],
'constraints': [models.UniqueConstraint(fields=('pattern', 'scope', 'target'), name='unique_blacklist_rule')],
},
),
]

View File

@@ -0,0 +1,4 @@
"""Common models"""
from apps.common.models.blacklist import BlacklistRule
__all__ = ['BlacklistRule']

View File

@@ -0,0 +1,71 @@
"""黑名单规则模型"""
from django.db import models
class BlacklistRule(models.Model):
"""黑名单规则模型
用于存储黑名单过滤规则支持域名、IP、CIDR 三种类型。
支持两层作用域:全局规则和 Target 级规则。
"""
class RuleType(models.TextChoices):
DOMAIN = 'domain', '域名'
IP = 'ip', 'IP地址'
CIDR = 'cidr', 'CIDR范围'
KEYWORD = 'keyword', '关键词'
class Scope(models.TextChoices):
GLOBAL = 'global', '全局规则'
TARGET = 'target', 'Target规则'
id = models.AutoField(primary_key=True)
pattern = models.CharField(
max_length=255,
help_text='Rule pattern, e.g. *.gov, 10.0.0.0/8, 192.168.1.1'
)
rule_type = models.CharField(
max_length=20,
choices=RuleType.choices,
help_text='Rule type: domain, ip, cidr'
)
scope = models.CharField(
max_length=20,
choices=Scope.choices,
db_index=True,
help_text='Scope: global or target'
)
target = models.ForeignKey(
'targets.Target',
on_delete=models.CASCADE,
null=True,
blank=True,
related_name='blacklist_rules',
help_text='Associated Target (set only when scope=target)'
)
description = models.CharField(
max_length=500,
blank=True,
default='',
help_text='Rule description'
)
created_at = models.DateTimeField(auto_now_add=True)
class Meta:
db_table = 'blacklist_rule'
indexes = [
models.Index(fields=['scope', 'rule_type']),
models.Index(fields=['target', 'scope']),
]
constraints = [
models.UniqueConstraint(
fields=['pattern', 'scope', 'target'],
name='unique_blacklist_rule'
),
]
ordering = ['-created_at']
def __str__(self):
if self.scope == self.Scope.TARGET and self.target:
return f"[{self.scope}:{self.target_id}] {self.pattern}"
return f"[{self.scope}] {self.pattern}"

View File

@@ -0,0 +1,12 @@
"""Common serializers"""
from .blacklist_serializers import (
BlacklistRuleSerializer,
GlobalBlacklistRuleSerializer,
TargetBlacklistRuleSerializer,
)
__all__ = [
'BlacklistRuleSerializer',
'GlobalBlacklistRuleSerializer',
'TargetBlacklistRuleSerializer',
]

View File

@@ -0,0 +1,68 @@
"""黑名单规则序列化器"""
from rest_framework import serializers
from apps.common.models import BlacklistRule
from apps.common.utils import detect_rule_type
class BlacklistRuleSerializer(serializers.ModelSerializer):
"""黑名单规则序列化器"""
class Meta:
model = BlacklistRule
fields = [
'id',
'pattern',
'rule_type',
'scope',
'target',
'description',
'created_at',
]
read_only_fields = ['id', 'rule_type', 'created_at']
def validate_pattern(self, value):
"""Validate the rule pattern"""
if not value or not value.strip():
raise serializers.ValidationError("Rule pattern must not be empty")
return value.strip()
def create(self, validated_data):
"""创建规则时自动识别规则类型"""
pattern = validated_data.get('pattern', '')
validated_data['rule_type'] = detect_rule_type(pattern)
return super().create(validated_data)
def update(self, instance, validated_data):
"""更新规则时重新识别规则类型"""
if 'pattern' in validated_data:
pattern = validated_data['pattern']
validated_data['rule_type'] = detect_rule_type(pattern)
return super().update(instance, validated_data)
class GlobalBlacklistRuleSerializer(BlacklistRuleSerializer):
"""全局黑名单规则序列化器"""
class Meta(BlacklistRuleSerializer.Meta):
fields = ['id', 'pattern', 'rule_type', 'description', 'created_at']
read_only_fields = ['id', 'rule_type', 'created_at']
def create(self, validated_data):
"""创建全局规则"""
validated_data['scope'] = BlacklistRule.Scope.GLOBAL
validated_data['target'] = None
return super().create(validated_data)
class TargetBlacklistRuleSerializer(BlacklistRuleSerializer):
"""Target 黑名单规则序列化器"""
class Meta(BlacklistRuleSerializer.Meta):
fields = ['id', 'pattern', 'rule_type', 'description', 'created_at']
read_only_fields = ['id', 'rule_type', 'created_at']
def create(self, validated_data):
"""创建 Target 规则target_id 由 view 设置)"""
validated_data['scope'] = BlacklistRule.Scope.TARGET
return super().create(validated_data)

View File

@@ -3,13 +3,16 @@
Provides system-level shared services, including:
- SystemLogService: system log reading service
- BlacklistService: blacklist filtering service
Note: FilterService has moved to apps.common.utils.filter_utils
Recommended usage: from apps.common.utils.filter_utils import apply_filters
"""
from .system_log_service import SystemLogService
from .blacklist_service import BlacklistService
__all__ = [
'SystemLogService',
'BlacklistService',
]

View File

@@ -0,0 +1,176 @@
"""
Blacklist rule management service
Handles CRUD operations for blacklist rules (database layer).
For filtering logic, use apps.common.utils.BlacklistFilter.
Architecture:
- Model: BlacklistRule (apps.common.models.blacklist)
- Service: BlacklistService (this file) - rule CRUD
- Utils: BlacklistFilter (apps.common.utils.blacklist_filter) - filtering logic
- View: GlobalBlacklistView, TargetViewSet.blacklist
"""
import logging
from typing import List, Dict, Any, Optional
from django.db.models import QuerySet
from apps.common.utils import detect_rule_type
logger = logging.getLogger(__name__)
def _normalize_patterns(patterns: List[str]) -> List[str]:
"""
Normalize a pattern list: deduplicate + drop empty entries
Args:
patterns: raw pattern list
Returns:
List[str]: deduplicated pattern list (order preserved)
"""
return list(dict.fromkeys(filter(None, (p.strip() for p in patterns))))
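A quick worked example of the helper: dict.fromkeys preserves first-seen order, and filter(None, ...) drops strings that are empty after stripping.

patterns = ["*.gov", " *.gov ", "", "10.0.0.0/8", "*.gov"]
print(_normalize_patterns(patterns))  # ['*.gov', '10.0.0.0/8']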
class BlacklistService:
"""
Blacklist rule management service
Handles only rule CRUD; contains no filtering logic.
Use the BlacklistFilter utility class for filtering.
"""
def get_global_rules(self) -> QuerySet:
"""
Get the list of global blacklist rules
Returns:
QuerySet: queryset of global rules
"""
from apps.common.models import BlacklistRule
return BlacklistRule.objects.filter(scope=BlacklistRule.Scope.GLOBAL)
def get_target_rules(self, target_id: int) -> QuerySet:
"""
Get the list of Target-level blacklist rules
Args:
target_id: Target ID
Returns:
QuerySet: queryset of Target-level rules
"""
from apps.common.models import BlacklistRule
return BlacklistRule.objects.filter(
scope=BlacklistRule.Scope.TARGET,
target_id=target_id
)
def get_rules(self, target_id: Optional[int] = None) -> List:
"""
Get blacklist rules (global + Target-level)
Args:
target_id: Target ID, used to load Target-level rules
Returns:
List[BlacklistRule]: rule list
"""
from apps.common.models import BlacklistRule
# Load global rules
rules = list(BlacklistRule.objects.filter(scope=BlacklistRule.Scope.GLOBAL))
# Load Target-level rules
if target_id:
target_rules = BlacklistRule.objects.filter(
scope=BlacklistRule.Scope.TARGET,
target_id=target_id
)
rules.extend(target_rules)
return rules
def replace_global_rules(self, patterns: List[str]) -> Dict[str, Any]:
"""
Replace all global blacklist rules (PUT semantics)
Args:
patterns: new list of rule patterns
Returns:
Dict: {'count': int} final rule count
"""
from apps.common.models import BlacklistRule
count = self._replace_rules(
patterns=patterns,
scope=BlacklistRule.Scope.GLOBAL,
target=None
)
logger.info("Replaced all global blacklist rules: %d", count)
return {'count': count}
def replace_target_rules(self, target, patterns: List[str]) -> Dict[str, Any]:
"""
Replace all Target-level blacklist rules (PUT semantics)
Args:
target: Target object
patterns: new list of rule patterns
Returns:
Dict: {'count': int} final rule count
"""
from apps.common.models import BlacklistRule
count = self._replace_rules(
patterns=patterns,
scope=BlacklistRule.Scope.TARGET,
target=target
)
logger.info("Replaced all Target blacklist rules: %d (Target: %s)", count, target.name)
return {'count': count}
def _replace_rules(self, patterns: List[str], scope: str, target=None) -> int:
"""
Internal: replace rules wholesale
Args:
patterns: list of rule patterns
scope: rule scope (GLOBAL/TARGET)
target: Target object (required only for TARGET scope)
Returns:
int: final rule count
"""
from apps.common.models import BlacklistRule
from django.db import transaction
patterns = _normalize_patterns(patterns)
with transaction.atomic():
# 1. Delete the old rules
delete_filter = {'scope': scope}
if target:
delete_filter['target'] = target
BlacklistRule.objects.filter(**delete_filter).delete()
# 2. Create the new rules
if patterns:
rules = [
BlacklistRule(
pattern=pattern,
rule_type=detect_rule_type(pattern),
scope=scope,
target=target
)
for pattern in patterns
]
BlacklistRule.objects.bulk_create(rules)
return len(patterns)
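A minimal sketch of the PUT-style full replacement from a caller's perspective (assumed call site):

service = BlacklistService()
result = service.replace_global_rules(["*.gov", "10.0.0.0/8", "*cdn*"])
print(result)  # {'count': 3}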

View File

@@ -2,13 +2,19 @@
Common module URL configuration
Route overview:
- /api/health/ health check endpoints (no auth required)
- /api/auth/* auth endpoints (login, logout, user info)
- /api/system/* system administration endpoints (log viewing, etc.)
- /api/health/ health check endpoints (no auth required)
- /api/auth/* auth endpoints (login, logout, user info)
- /api/system/* system administration endpoints (log viewing, etc.)
- /api/blacklist/* blacklist management endpoints
"""
from django.urls import path
from .views import LoginView, LogoutView, MeView, ChangePasswordView, SystemLogsView, SystemLogFilesView, HealthCheckView
from .views import (
LoginView, LogoutView, MeView, ChangePasswordView,
SystemLogsView, SystemLogFilesView, HealthCheckView,
GlobalBlacklistView,
)
urlpatterns = [
# Health check (no auth required)
@@ -23,4 +29,7 @@ urlpatterns = [
# System administration
path('system/logs/', SystemLogsView.as_view(), name='system-logs'),
path('system/logs/files/', SystemLogFilesView.as_view(), name='system-log-files'),
# Blacklist management (PUT full-replacement mode)
path('blacklist/rules/', GlobalBlacklistView.as_view(), name='blacklist-rules'),
]

View File

@@ -11,8 +11,14 @@ from .csv_utils import (
generate_csv_rows,
format_list_field,
format_datetime,
create_csv_export_response,
UTF8_BOM,
)
from .blacklist_filter import (
BlacklistFilter,
detect_rule_type,
extract_host,
)
__all__ = [
'deduplicate_for_bulk',
@@ -24,5 +30,9 @@ __all__ = [
'generate_csv_rows',
'format_list_field',
'format_datetime',
'create_csv_export_response',
'UTF8_BOM',
'BlacklistFilter',
'detect_rule_type',
'extract_host',
]

View File

@@ -0,0 +1,246 @@
"""
Blacklist filtering utilities
Provides blacklist matching for domains, IPs, CIDR ranges, and keywords.
Pure utility classes; no database access involved.
Supported rule types:
1. Exact domain match: example.com
- Rule: example.com
- Matches: example.com
- Does not match: sub.example.com, other.com
2. Domain suffix match: *.example.com
- Rule: *.example.com
- Matches: sub.example.com, a.b.example.com, example.com
- Does not match: other.com, example.com.cn
3. Keyword match: *cdn*
- Rule: *cdn*
- Matches: cdn.example.com, a.cdn.b.com, mycdn123.com
- Does not match: example.com (does not contain cdn)
4. Exact IP match: 192.168.1.1
- Rule: 192.168.1.1
- Matches: 192.168.1.1
- Does not match: 192.168.1.2
5. CIDR range match: 192.168.0.0/24
- Rule: 192.168.0.0/24
- Matches: 192.168.0.1, 192.168.0.255
- Does not match: 192.168.1.1
Usage:
from apps.common.utils import BlacklistFilter
# Create a filter (pass in a rule list)
rules = BlacklistRule.objects.filter(...)
filter = BlacklistFilter(rules)
# Check a single target
if filter.is_allowed('http://example.com'):
process('http://example.com')
# Streaming usage
for url in urls:
if filter.is_allowed(url):
process(url)
"""
import ipaddress
import logging
from typing import List, Optional
from urllib.parse import urlparse
from apps.common.validators import is_valid_ip, validate_cidr
logger = logging.getLogger(__name__)
def detect_rule_type(pattern: str) -> str:
"""
Auto-detect the rule type
Supported patterns:
- Exact domain match: example.com
- Domain suffix match: *.example.com
- Keyword match: *cdn* (matches domains containing cdn)
- Exact IP match: 192.168.1.1
- CIDR range: 192.168.0.0/24
Args:
pattern: rule pattern string
Returns:
str: rule type ('domain', 'ip', 'cidr', 'keyword')
"""
if not pattern:
return 'domain'
pattern = pattern.strip()
# Check the keyword pattern: *keyword* (stars on both ends, no dot inside)
if pattern.startswith('*') and pattern.endswith('*') and len(pattern) > 2:
keyword = pattern[1:-1]
# A keyword must not contain a dot (otherwise it may be a domain pattern)
if '.' not in keyword:
return 'keyword'
# Check CIDR (contains /)
if '/' in pattern:
try:
validate_cidr(pattern)
return 'cidr'
except ValueError:
pass
# Check IP (validate after stripping any wildcard prefix)
clean_pattern = pattern.lstrip('*').lstrip('.')
if is_valid_ip(clean_pattern):
return 'ip'
# Default to domain
return 'domain'
def extract_host(target: str) -> str:
"""
Extract the hostname from a target string
Supports:
- Plain domains: example.com
- Plain IPs: 192.168.1.1
- URLs: http://example.com/path
Args:
target: target string
Returns:
str: the extracted hostname
"""
if not target:
return ''
target = target.strip()
# For URLs, extract the hostname
if '://' in target:
try:
parsed = urlparse(target)
return parsed.hostname or target
except Exception:
return target
return target
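Expected classifications per the rules documented above (illustrative; assumes is_valid_ip and validate_cidr from apps.common.validators behave as their names suggest):

assert detect_rule_type("*.example.com") == "domain"
assert detect_rule_type("*cdn*") == "keyword"
assert detect_rule_type("192.168.1.1") == "ip"
assert detect_rule_type("10.0.0.0/8") == "cidr"
assert extract_host("http://example.com/path") == "example.com"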
class BlacklistFilter:
"""
Blacklist filter
Pre-compiles rules for efficient matching.
"""
def __init__(self, rules: List):
"""
Initialize the filter
Args:
rules: list of BlacklistRule objects
"""
from apps.common.models import BlacklistRule
# Pre-parse: bucket rules by type + pre-compile CIDR networks
self._domain_rules = [] # (pattern, is_wildcard, suffix)
self._ip_rules = set() # exact IPs in a set for O(1) lookup
self._cidr_rules = [] # (pattern, network_obj)
self._keyword_rules = [] # keyword list (lowercased)
# Deduplicate: the same rule may appear in multiple scopes
seen_patterns = set()
for rule in rules:
if rule.pattern in seen_patterns:
continue
seen_patterns.add(rule.pattern)
if rule.rule_type == BlacklistRule.RuleType.DOMAIN:
pattern = rule.pattern.lower()
if pattern.startswith('*.'):
self._domain_rules.append((pattern, True, pattern[1:]))
else:
self._domain_rules.append((pattern, False, None))
elif rule.rule_type == BlacklistRule.RuleType.IP:
self._ip_rules.add(rule.pattern)
elif rule.rule_type == BlacklistRule.RuleType.CIDR:
try:
network = ipaddress.ip_network(rule.pattern, strict=False)
self._cidr_rules.append((rule.pattern, network))
except ValueError:
pass
elif rule.rule_type == BlacklistRule.RuleType.KEYWORD:
# *cdn* -> cdn
keyword = rule.pattern[1:-1].lower()
self._keyword_rules.append(keyword)
def is_allowed(self, target: str) -> bool:
"""
Check whether a target passes the filter
Args:
target: the target to check (domain/IP/URL)
Returns:
bool: True means allowed (not blacklisted); False means filtered out
"""
if not target:
return True
host = extract_host(target)
if not host:
return True
# Determine the input type first, then take the matching branch
if is_valid_ip(host):
return self._check_ip_rules(host)
else:
return self._check_domain_rules(host)
def _check_domain_rules(self, host: str) -> bool:
"""检查域名规则(精确匹配 + 后缀匹配 + 关键词匹配)"""
host_lower = host.lower()
# 1. Domain rules (exact + suffix)
for pattern, is_wildcard, suffix in self._domain_rules:
if is_wildcard:
if host_lower.endswith(suffix) or host_lower == pattern[2:]:
return False
else:
if host_lower == pattern:
return False
# 2. Keyword match (substring check, O(n*m))
for keyword in self._keyword_rules:
if keyword in host_lower:
return False
return True
def _check_ip_rules(self, host: str) -> bool:
"""检查 IP 规则(精确匹配 + CIDR"""
# 1. IP 精确匹配O(1)
if host in self._ip_rules:
return False
# 2. CIDR match
if self._cidr_rules:
try:
ip_obj = ipaddress.ip_address(host)
for _, network in self._cidr_rules:
if ip_obj in network:
return False
except ValueError:
pass
return True
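A minimal end-to-end sketch (assumed call site) — rules come from the service layer, while the filter itself stays a pure in-memory check:

from apps.common.services import BlacklistService
from apps.common.utils import BlacklistFilter

rules = BlacklistService().get_rules(target_id=1)  # global + Target-level rules
blacklist = BlacklistFilter(rules)
urls = ["http://sub.example.gov/", "http://10.0.0.5:8080/", "http://app.example.com/"]
allowed = [u for u in urls if blacklist.is_allowed(u)]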

View File

@@ -4,13 +4,21 @@
- UTF-8 BOM (Excel compatibility)
- RFC 4180 escaping
- Streaming generation (memory friendly)
- File responses with Content-Length (so browsers can show download progress)
"""
import csv
import io
import os
import tempfile
import logging
from datetime import datetime
from typing import Iterator, Dict, Any, List, Callable, Optional
from django.http import FileResponse, StreamingHttpResponse
logger = logging.getLogger(__name__)
# UTF-8 BOM: ensures Excel detects the encoding correctly
UTF8_BOM = '\ufeff'
@@ -114,3 +122,123 @@ def format_datetime(dt: Optional[datetime]) -> str:
dt = timezone.localtime(dt)
return dt.strftime('%Y-%m-%d %H:%M:%S')
def create_csv_export_response(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
filename: str,
field_formatters: Optional[Dict[str, Callable]] = None,
show_progress: bool = True
) -> FileResponse | StreamingHttpResponse:
"""
Create a CSV export response
The response type depends on show_progress:
- True: temp file + FileResponse (with Content-Length; browsers show download progress)
- False: StreamingHttpResponse (more memory friendly, but no download progress)
Args:
data_iterator: data iterator; each element is a dict
headers: CSV header list
filename: download filename (e.g. "export_2024.csv")
field_formatters: dict of field formatter functions
show_progress: whether to enable download progress (default True)
Returns:
FileResponse or StreamingHttpResponse
Example:
>>> data_iter = service.iter_data()
>>> headers = ['url', 'host', 'created_at']
>>> formatters = {'created_at': format_datetime}
>>> response = create_csv_export_response(
... data_iter, headers, 'websites.csv', formatters
... )
>>> return response
"""
if show_progress:
return _create_file_response(data_iterator, headers, filename, field_formatters)
else:
return _create_streaming_response(data_iterator, headers, filename, field_formatters)
def _create_file_response(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
filename: str,
field_formatters: Optional[Dict[str, Callable]] = None
) -> FileResponse:
"""
Create a file response with Content-Length (supports browser download progress)
Implementation: write to a temp file first, then return a FileResponse
"""
# Create the temp file
temp_file = tempfile.NamedTemporaryFile(
mode='w',
suffix='.csv',
delete=False,
encoding='utf-8'
)
temp_path = temp_file.name
try:
# Stream the CSV rows into the temp file
for row in generate_csv_rows(data_iterator, headers, field_formatters):
temp_file.write(row)
temp_file.close()
# Get the file size
file_size = os.path.getsize(temp_path)
# Create the file response
response = FileResponse(
open(temp_path, 'rb'),
content_type='text/csv; charset=utf-8',
as_attachment=True,
filename=filename
)
response['Content-Length'] = file_size
# Cleanup callback: delete the temp file once the response finishes
original_close = response.file_to_stream.close
def close_and_cleanup():
original_close()
try:
os.unlink(temp_path)
except OSError:
pass
response.file_to_stream.close = close_and_cleanup
return response
except Exception as e:
# Clean up the temp file
try:
temp_file.close()
except Exception:
pass
try:
os.unlink(temp_path)
except OSError:
pass
logger.error(f"Failed to create the CSV export response: {e}")
raise
def _create_streaming_response(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
filename: str,
field_formatters: Optional[Dict[str, Callable]] = None
) -> StreamingHttpResponse:
"""
Create a streaming response (no Content-Length; more memory friendly)
"""
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, field_formatters),
content_type='text/csv; charset=utf-8'
)
response['Content-Disposition'] = f'attachment; filename="{filename}"'
return response
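A hypothetical call site for the streaming variant: when an export may be too large to spool to a temp file first, show_progress=False trades the browser progress bar for constant memory use.

return create_csv_export_response(
    data_iterator=rows,
    headers=['url', 'host', 'created_at'],
    filename='export.csv',
    field_formatters={'created_at': format_datetime},
    show_progress=False,  # StreamingHttpResponse; no Content-Length
)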

View File

@@ -5,14 +5,17 @@
- Health check view (Docker health checks)
- Auth views: login, logout, user info, password change
- System log views: live log viewing
- Blacklist view: global blacklist rule management
"""
from .health_views import HealthCheckView
from .auth_views import LoginView, LogoutView, MeView, ChangePasswordView
from .system_log_views import SystemLogsView, SystemLogFilesView
from .blacklist_views import GlobalBlacklistView
__all__ = [
'HealthCheckView',
'LoginView', 'LogoutView', 'MeView', 'ChangePasswordView',
'SystemLogsView', 'SystemLogFilesView',
'GlobalBlacklistView',
]

View File

@@ -0,0 +1,80 @@
"""全局黑名单 API 视图"""
import logging
from rest_framework import status
from rest_framework.views import APIView
from rest_framework.permissions import IsAuthenticated
from apps.common.response_helpers import success_response, error_response
from apps.common.services import BlacklistService
logger = logging.getLogger(__name__)
class GlobalBlacklistView(APIView):
"""
Global blacklist rules API
Endpoints:
- GET /api/blacklist/rules/ - list global blacklist rules
- PUT /api/blacklist/rules/ - replace all rules (textarea-save scenario)
Design notes:
- Uses PUT full-replacement semantics, suited to a "one rule per line in a textarea" frontend
- The user edits the textarea -> clicks save -> the backend replaces everything
Architecture (MVS pattern):
- View: parameter validation, response formatting
- Service: business logic (BlacklistService)
- Model: data persistence (BlacklistRule)
"""
permission_classes = [IsAuthenticated]
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.blacklist_service = BlacklistService()
def get(self, request):
"""
List the global blacklist rules
Response format:
{
"patterns": ["*.gov", "*.edu", "10.0.0.0/8"]
}
"""
rules = self.blacklist_service.get_global_rules()
patterns = list(rules.values_list('pattern', flat=True))
return success_response(data={'patterns': patterns})
def put(self, request):
"""
Replace all global blacklist rules
Request format:
{
"patterns": ["*.gov", "*.edu", "10.0.0.0/8"]
}
Or clear all rules with an empty array:
{
"patterns": []
}
"""
patterns = request.data.get('patterns', [])
# Accept newline-separated string input for compatibility
if isinstance(patterns, str):
patterns = [p for p in patterns.split('\n') if p.strip()]
if not isinstance(patterns, list):
return error_response(
code='VALIDATION_ERROR',
message='patterns must be an array'
)
# Delegate the full replacement to the service layer
result = self.blacklist_service.replace_global_rules(patterns)
return success_response(data=result)
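A hypothetical client call for the full-replacement endpoint (host and auth are placeholders); the response envelope shown is illustrative.

import requests

resp = requests.put(
    "http://localhost:8000/api/blacklist/rules/",
    json={"patterns": ["*.gov", "*.edu", "10.0.0.0/8"]},
    headers={"Authorization": "Bearer <token>"},
)
print(resp.json())  # e.g. {"data": {"count": 3}}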

View File

@@ -1,4 +1,4 @@
# Generated by Django 5.2.7 on 2026-01-02 04:45
# Generated by Django 5.2.7 on 2026-01-06 00:55
from django.db import migrations, models

View File

@@ -16,10 +16,9 @@ class GobyFingerprintService(BaseFingerprintService):
"""
Validate a single Goby fingerprint
Validation rules:
- name must exist and be non-empty
- logic must exist
- rule must be an array
Supports two formats:
1. Standard format: {"name": "...", "logic": "...", "rule": [...]}
2. JSONL format: {"product": "...", "rule": "..."}
Args:
item: a single fingerprint record
@@ -27,25 +26,43 @@ class GobyFingerprintService(BaseFingerprintService):
Returns:
bool: whether it is valid
"""
# Standard format: name + logic + rule (array)
name = item.get('name', '')
logic = item.get('logic', '')
rule = item.get('rule')
return bool(name and str(name).strip()) and bool(logic) and isinstance(rule, list)
if name and item.get('logic') is not None and isinstance(item.get('rule'), list):
return bool(str(name).strip())
# JSONL format: product + rule (string)
product = item.get('product', '')
rule = item.get('rule', '')
return bool(product and str(product).strip() and rule and str(rule).strip())
def to_model_data(self, item: dict) -> dict:
"""
Convert the Goby JSON format into Model fields
Supports two input formats:
1. Standard format: {"name": "...", "logic": "...", "rule": [...]}
2. JSONL format: {"product": "...", "rule": "..."}
Args:
item: raw Goby JSON data
Returns:
dict: Model field data
"""
# Standard format
if 'name' in item and isinstance(item.get('rule'), list):
return {
'name': str(item.get('name', '')).strip(),
'logic': item.get('logic', ''),
'rule': item.get('rule', []),
}
# JSONL format: wrap the rule string in a single-element array
return {
'name': str(item.get('name', '')).strip(),
'logic': item.get('logic', ''),
'rule': item.get('rule', []),
'name': str(item.get('product', '')).strip(),
'logic': 'or', # the JSONL format defaults to 'or' logic
'rule': [item.get('rule', '')] if item.get('rule') else [],
}
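Illustrative conversions for the two accepted formats (assumes the service can be constructed directly; the sample payloads are made up):

svc = GobyFingerprintService()
std = {"name": "Nginx", "logic": "or", "rule": ["header=nginx"]}
jsonl = {"product": "Tomcat", "rule": "title=Apache Tomcat"}
svc.to_model_data(std)    # {'name': 'Nginx', 'logic': 'or', 'rule': ['header=nginx']}
svc.to_model_data(jsonl)  # {'name': 'Tomcat', 'logic': 'or', 'rule': ['title=Apache Tomcat']}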
def get_export_data(self) -> list:

View File

@@ -139,7 +139,7 @@ class BaseFingerprintViewSet(viewsets.ModelViewSet):
POST /api/engine/fingerprints/{type}/import_file/
Request format: multipart/form-data
- file: JSON file
- file: JSON file (supports standard JSON and JSONL formats)
Returns: same as batch_create
"""
@@ -148,9 +148,12 @@ class BaseFingerprintViewSet(viewsets.ModelViewSet):
raise ValidationError('Missing file')
try:
json_data = json.load(file)
content = file.read().decode('utf-8')
json_data = self._parse_json_content(content)
except json.JSONDecodeError as e:
raise ValidationError(f'Invalid JSON format: {e}')
except UnicodeDecodeError as e:
raise ValidationError(f'File encoding error: {e}')
fingerprints = self.parse_import_data(json_data)
if not fingerprints:
@@ -159,6 +162,41 @@ class BaseFingerprintViewSet(viewsets.ModelViewSet):
result = self.get_service().batch_create_fingerprints(fingerprints)
return success_response(data=result, status_code=status.HTTP_201_CREATED)
def _parse_json_content(self, content: str):
"""
Parse JSON content; supports standard JSON and JSONL formats
Args:
content: file content string
Returns:
the parsed data (list or dict)
"""
content = content.strip()
# Try standard JSON first
try:
return json.loads(content)
except json.JSONDecodeError:
pass
# Try JSONL (one JSON object per line)
lines = content.split('\n')
result = []
for i, line in enumerate(lines):
line = line.strip()
if not line:
continue
try:
result.append(json.loads(line))
except json.JSONDecodeError as e:
raise json.JSONDecodeError(f'Line {i + 1} failed to parse: {e.msg}', e.doc, e.pos)
if not result:
raise json.JSONDecodeError('File is empty or the format is invalid', content, 0)
return result
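Illustrative behavior of the fallback parser: a JSONL payload fails json.loads as a whole ("Extra data"), so it is re-parsed line by line.

content = '{"product": "Nginx", "rule": "server=nginx"}\n{"product": "Tomcat", "rule": "title=Tomcat"}'
# viewset._parse_json_content(content)
# -> [{'product': 'Nginx', 'rule': 'server=nginx'}, {'product': 'Tomcat', 'rule': 'title=Tomcat'}]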
@action(detail=False, methods=['post'], url_path='bulk-delete')
def bulk_delete(self, request):
"""

View File

@@ -13,12 +13,14 @@ SCAN_TOOLS_BASE_PATH = getattr(settings, 'SCAN_TOOLS_BASE_PATH', '/usr/local/bin
SUBDOMAIN_DISCOVERY_COMMANDS = {
'subfinder': {
# By default use all data sources (more thorough, slightly slower) and always enable recursion
# -all use all data sources
# -recursive enable recursive enumeration for sources that support it (enabled by default)
'base': "subfinder -d {domain} -all -recursive -o '{output_file}' -silent",
# Use all data sources (including paid ones, as long as an API key is configured)
# -all use all data sources (slow but thorough)
# -v verbose output, including which data source was used (for debugging)
# Note: do not add -recursive; it excludes sources that do not support recursion (e.g. fofa)
'base': "subfinder -d {domain} -all -o '{output_file}' -v",
'optional': {
'threads': '-t {threads}', # controls the number of concurrent goroutines
'provider_config': "-pc '{provider_config}'", # path to the provider config file
}
},
@@ -261,11 +263,16 @@ COMMAND_TEMPLATES = {
'directory_scan': DIRECTORY_SCAN_COMMANDS,
'url_fetch': URL_FETCH_COMMANDS,
'vuln_scan': VULN_SCAN_COMMANDS,
'screenshot': {}, # uses a native Python library (Playwright); no command template
}
# ==================== Scan type configuration ====================
# Execution stage definitions (run in order)
# Stage 1: Asset discovery - subdomains → ports → site probing → fingerprinting
# Stage 2: URL collection - URL fetch + directory scan (in parallel)
# Stage 3: Screenshots - runs after URL collection to capture more of the discovered pages
# Stage 4: Vulnerability scan - runs last
EXECUTION_STAGES = [
{
'mode': 'sequential',
@@ -275,6 +282,10 @@ EXECUTION_STAGES = [
'mode': 'parallel',
'flows': ['url_fetch', 'directory_scan']
},
{
'mode': 'sequential',
'flows': ['screenshot']
},
{
'mode': 'sequential',
'flows': ['vuln_scan']

View File

@@ -101,6 +101,16 @@ directory_scan:
match-codes: 200,201,301,302,401,403 # HTTP status codes to match
# rate: 0 # requests per second (default 0 = unlimited)
screenshot:
# ==================== Website screenshots ====================
# Screenshot websites with Playwright; images are saved as WebP
# Runs in its own stage after url_fetch and directory_scan complete
tools:
playwright:
enabled: true
concurrency: 5 # concurrent screenshots (default 5)
url_sources: [websites] # URL sources; currently screenshots websites, can also be [websites, endpoints]
url_fetch:
# ==================== URL fetch ====================
tools:

View File

@@ -33,7 +33,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command, ensure_wordlist_local
from apps.scan.utils import config_parser, build_scan_command, ensure_wordlist_local, user_log
logger = logging.getLogger(__name__)
@@ -413,6 +413,7 @@ def _run_scans_concurrently(
logger.info("="*60)
logger.info("Using tool: %s (concurrent mode, max_workers=%d)", tool_name, max_workers)
logger.info("="*60)
user_log(scan_id, "directory_scan", f"Running {tool_name}")
# If wordlist_name is configured, first ensure the matching wordlist file exists locally (with hash verification)
wordlist_name = tool_config.get('wordlist_name')
@@ -467,6 +468,11 @@ def _run_scans_concurrently(
total_tasks = len(scan_params_list)
logger.info("Starting %d scan tasks in batches (%d per batch)...", total_tasks, max_workers)
# Progress milestone tracking
last_progress_percent = 0
tool_directories = 0
tool_processed = 0
batch_num = 0
for batch_start in range(0, total_tasks, max_workers):
batch_end = min(batch_start + max_workers, total_tasks)
@@ -498,7 +504,9 @@ def _run_scans_concurrently(
result = future.result() # block until this single task completes
directories_found = result.get('created_directories', 0)
total_directories += directories_found
tool_directories += directories_found
processed_sites_count += 1
tool_processed += 1
logger.info(
"✓ [%d/%d] Site scan finished: %s - found %d directories",
@@ -517,6 +525,19 @@
"✗ [%d/%d] Site scan failed: %s - error: %s",
idx, len(sites), site_url, exc
)
# Progress milestone: log once every 20%
current_progress = int((batch_end / total_tasks) * 100)
if current_progress >= last_progress_percent + 20:
user_log(scan_id, "directory_scan", f"Progress: {batch_end}/{total_tasks} sites scanned")
last_progress_percent = (current_progress // 20) * 20
# Tool completion logs (developer log + user log)
logger.info(
"✓ Tool %s finished - sites processed: %d/%d, directories found: %d",
tool_name, tool_processed, total_tasks, tool_directories
)
user_log(scan_id, "directory_scan", f"{tool_name} completed: found {tool_directories} directories")
# Output the summary
if failed_sites:
@@ -605,6 +626,8 @@ def directory_scan_flow(
"="*60
)
user_log(scan_id, "directory_scan", "Starting directory scan")
# Parameter validation
if scan_id is None:
raise ValueError("scan_id must not be empty")
@@ -625,7 +648,8 @@ def directory_scan_flow(
sites_file, site_count = _export_site_urls(target_id, target_name, directory_scan_dir)
if site_count == 0:
logger.warning("No sites under the target; skipping directory scan")
logger.warning("Skipping directory scan: no sites to scan - Scan ID: %s", scan_id)
user_log(scan_id, "directory_scan", "Skipped: no sites to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -664,7 +688,9 @@ def directory_scan_flow(
logger.warning("All site scans failed - total sites: %d, failed: %d", site_count, len(failed_sites))
# Do not raise; let the scan continue
logger.info("="*60 + "\n✓ Directory scan finished\n" + "="*60)
# Record flow completion
logger.info("✓ Directory scan finished - directories found: %d", total_directories)
user_log(scan_id, "directory_scan", f"directory_scan completed: found {total_directories} directories")
return {
'success': True,

View File

@@ -29,7 +29,7 @@ from apps.scan.tasks.fingerprint_detect import (
export_urls_for_fingerprint_task,
run_xingfinger_and_stream_update_tech_task,
)
from apps.scan.utils import build_scan_command
from apps.scan.utils import build_scan_command, user_log
from apps.scan.utils.fingerprint_helpers import get_fingerprint_paths
logger = logging.getLogger(__name__)
@@ -168,6 +168,7 @@
"Starting %s fingerprint detection - URLs: %d, timeout: %ds, fingerprint libs: %s",
tool_name, url_count, timeout, list(fingerprint_paths.keys())
)
user_log(scan_id, "fingerprint_detect", f"Running {tool_name}: {command}")
# 6. Execute the scan task
try:
@@ -190,17 +191,21 @@ def _run_fingerprint_detect(
'fingerprint_libs': list(fingerprint_paths.keys())
}
tool_updated = result.get('updated_count', 0)
logger.info(
"✓ Tool %s finished - records processed: %d, updated: %d, not found: %d",
tool_name,
result.get('processed_records', 0),
result.get('updated_count', 0),
tool_updated,
result.get('not_found_count', 0)
)
user_log(scan_id, "fingerprint_detect", f"{tool_name} completed: identified {tool_updated} fingerprints")
except Exception as exc:
failed_tools.append({'tool': tool_name, 'reason': str(exc)})
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("Tool %s failed: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "fingerprint_detect", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
@@ -272,6 +277,8 @@ def fingerprint_detect_flow(
"="*60
)
user_log(scan_id, "fingerprint_detect", "Starting fingerprint detection")
# Parameter validation
if scan_id is None:
raise ValueError("scan_id must not be empty")
@@ -293,7 +300,8 @@ def fingerprint_detect_flow(
urls_file, url_count = _export_urls(target_id, fingerprint_dir, source)
if url_count == 0:
logger.warning("No usable URLs under the target; skipping fingerprint detection")
logger.warning("Skipping fingerprint detection: no URLs to scan - Scan ID: %s", scan_id)
user_log(scan_id, "fingerprint_detect", "Skipped: no URLs to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -332,8 +340,6 @@ def fingerprint_detect_flow(
source=source
)
logger.info("="*60 + "\n✓ Fingerprint detection finished\n" + "="*60)
# Dynamically build the list of executed tasks
executed_tasks = ['export_urls_for_fingerprint']
executed_tasks.extend([f'run_xingfinger ({tool})' for tool in tool_stats.keys()])
@@ -344,6 +350,10 @@ def fingerprint_detect_flow(
total_created = sum(stats['result'].get('created_count', 0) for stats in tool_stats.values())
total_snapshots = sum(stats['result'].get('snapshot_count', 0) for stats in tool_stats.values())
# Record flow completion
logger.info("✓ Fingerprint detection finished - fingerprints identified: %d", total_updated)
user_log(scan_id, "fingerprint_detect", f"fingerprint_detect completed: identified {total_updated} fingerprints")
successful_tools = [name for name in enabled_tools.keys()
if name not in [f['tool'] for f in failed_tools]]

View File

@@ -99,15 +99,13 @@ def initiate_scan_flow(
raise ValueError("engine_name is required")
logger.info(
"="*60 + "\n" +
"Initializing scan task\n" +
f" Scan ID: {scan_id}\n" +
f" Target: {target_name}\n" +
f" Engine: {engine_name}\n" +
f" Workspace: {scan_workspace_dir}\n" +
"="*60
)
logger.info("="*60)
logger.info("Initializing scan task")
logger.info(f"Scan ID: {scan_id}")
logger.info(f"Target: {target_name}")
logger.info(f"Engine: {engine_name}")
logger.info(f"Workspace: {scan_workspace_dir}")
logger.info("="*60)
# ==================== Task 1: Create the scan workspace ====================
scan_workspace_path = setup_scan_workspace(scan_workspace_dir)
@@ -115,7 +113,7 @@ def initiate_scan_flow(
# ==================== Task 2: Fetch the engine configuration ====================
from apps.scan.models import Scan
scan = Scan.objects.get(id=scan_id)
engine_config = scan.merged_configuration
engine_config = scan.yaml_configuration
# Use engine_names for display
display_engine_name = ', '.join(scan.engine_names) if scan.engine_names else engine_name
@@ -126,11 +124,9 @@ def initiate_scan_flow(
# FlowOrchestrator has already parsed all tool configurations
enabled_tools_by_type = orchestrator.enabled_tools_by_type
logger.info(
f"执行计划生成成功:\n"
f" 扫描类型: {''.join(orchestrator.scan_types)}\n"
f" 总共 {len(orchestrator.scan_types)} 个 Flow"
)
logger.info("执行计划生成成功")
logger.info(f"扫描类型: {''.join(orchestrator.scan_types)}")
logger.info(f"总共 {len(orchestrator.scan_types)} 个 Flow")
# ==================== Initialize stage progress ====================
# Initialize right after parsing the config; the full scan_types list is available by then
@@ -209,9 +205,13 @@ def initiate_scan_flow(
for mode, enabled_flows in orchestrator.get_execution_stages():
if mode == 'sequential':
# Sequential execution
logger.info(f"\n{'='*60}\nSequential stage: {', '.join(enabled_flows)}\n{'='*60}")
logger.info("="*60)
logger.info(f"Sequential stage: {', '.join(enabled_flows)}")
logger.info("="*60)
for scan_type, flow_func, flow_specific_kwargs in get_valid_flows(enabled_flows):
logger.info(f"\n{'='*60}\nRunning flow: {scan_type}\n{'='*60}")
logger.info("="*60)
logger.info(f"Running flow: {scan_type}")
logger.info("="*60)
try:
result = flow_func(**flow_specific_kwargs)
record_flow_result(scan_type, result=result)
@@ -220,12 +220,16 @@ def initiate_scan_flow(
elif mode == 'parallel':
# 并行执行阶段:通过 Task 包装子 Flow并使用 Prefect TaskRunner 并发运行
logger.info(f"\n{'='*60}\n并行执行阶段: {', '.join(enabled_flows)}\n{'='*60}")
logger.info("="*60)
logger.info(f"并行执行阶段: {', '.join(enabled_flows)}")
logger.info("="*60)
futures = []
# 提交所有并行子 Flow 任务
for scan_type, flow_func, flow_specific_kwargs in get_valid_flows(enabled_flows):
logger.info(f"\n{'='*60}\n提交并行子 Flow 任务: {scan_type}\n{'='*60}")
logger.info("="*60)
logger.info(f"提交并行子 Flow 任务: {scan_type}")
logger.info("="*60)
future = _run_subflow_task.submit(
scan_type=scan_type,
flow_func=flow_func,
@@ -246,12 +250,10 @@ def initiate_scan_flow(
record_flow_result(scan_type, error=e)
# ==================== 完成 ====================
logger.info(
"="*60 + "\n" +
"✓ 扫描任务初始化完成\n" +
f" 执行的 Flow: {', '.join(executed_flows)}\n" +
"="*60
)
logger.info("="*60)
logger.info("✓ 扫描任务初始化完成")
logger.info(f"执行的 Flow: {', '.join(executed_flows)}")
logger.info("="*60)
# ==================== 返回结果 ====================
return {

View File

@@ -20,7 +20,7 @@ from pathlib import Path
from typing import Callable
from prefect import flow
from apps.scan.tasks.port_scan import (
export_scan_targets_task,
export_hosts_task,
run_and_stream_save_ports_task
)
from apps.scan.handlers.scan_flow_handlers import (
@@ -28,7 +28,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command
from apps.scan.utils import config_parser, build_scan_command, user_log
logger = logging.getLogger(__name__)
@@ -157,9 +157,9 @@ def _parse_port_count(tool_config: dict) -> int:
def _export_scan_targets(target_id: int, port_scan_dir: Path) -> tuple[str, int, str]:
def _export_hosts(target_id: int, port_scan_dir: Path) -> tuple[str, int, str]:
"""
导出扫描目标到文件
导出主机列表到文件
根据 Target 类型自动决定导出内容:
- DOMAIN: 从 Subdomain 表导出子域名
@@ -171,31 +171,31 @@ def _export_scan_targets(target_id: int, port_scan_dir: Path) -> tuple[str, int,
port_scan_dir: 端口扫描目录
Returns:
tuple: (targets_file, target_count, target_type)
tuple: (hosts_file, host_count, target_type)
"""
logger.info("Step 1: 导出扫描目标列表")
logger.info("Step 1: 导出主机列表")
targets_file = str(port_scan_dir / 'targets.txt')
export_result = export_scan_targets_task(
hosts_file = str(port_scan_dir / 'hosts.txt')
export_result = export_hosts_task(
target_id=target_id,
output_file=targets_file,
output_file=hosts_file,
batch_size=1000 # 每次读取 1000 条,优化内存占用
)
target_count = export_result['total_count']
host_count = export_result['total_count']
target_type = export_result.get('target_type', 'unknown')
logger.info(
"扫描目标导出完成 - 类型: %s, 文件: %s, 数量: %d",
"主机列表导出完成 - 类型: %s, 文件: %s, 数量: %d",
target_type,
export_result['output_file'],
target_count
host_count
)
if target_count == 0:
logger.warning("目标下没有可扫描的地址,无法执行端口扫描")
if host_count == 0:
logger.warning("目标下没有可扫描的主机,无法执行端口扫描")
return export_result['output_file'], target_count, target_type
return export_result['output_file'], host_count, target_type
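The docstring above says the export dispatches on the Target type. A minimal sketch of that contract, assuming simplified model fields (`target.type`, a `subdomains` relation) that may not match the project's actual schema:

def export_hosts_sketch(target, output_file: str, batch_size: int = 1000) -> dict:
    """Hypothetical dispatch: DOMAIN targets export subdomains, IP/CIDR export themselves."""
    total = 0
    with open(output_file, 'w') as fh:
        if target.type == 'DOMAIN':
            names = target.subdomains.values_list('name', flat=True)
            for name in names.iterator(chunk_size=batch_size):
                fh.write(name + '\n')
                total += 1
        else:
            fh.write(target.name + '\n')  # the IP/CIDR expression itself is the host
            total = 1
    return {'output_file': output_file, 'total_count': total, 'target_type': target.type}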
def _run_scans_sequentially(
@@ -265,6 +265,7 @@ def _run_scans_sequentially(
# 3. 执行扫描任务
logger.info("开始执行 %s 扫描(超时: %d秒)...", tool_name, config_timeout)
user_log(scan_id, "port_scan", f"Running {tool_name}: {command}")
try:
# 直接调用 task串行执行
@@ -286,26 +287,31 @@ def _run_scans_sequentially(
'result': result,
'timeout': config_timeout
}
processed_records += result.get('processed_records', 0)
tool_records = result.get('processed_records', 0)
processed_records += tool_records
logger.info(
"✓ 工具 %s 流式处理完成 - 记录数: %d",
tool_name, result.get('processed_records', 0)
tool_name, tool_records
)
user_log(scan_id, "port_scan", f"{tool_name} completed: found {tool_records} ports")
except subprocess.TimeoutExpired as exc:
# 超时异常单独处理
# 注意:流式处理任务超时时,已解析的数据已保存到数据库
reason = f"执行超时(配置: {config_timeout}秒)"
reason = f"timeout after {config_timeout}s"
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.warning(
"⚠️ 工具 %s 执行超时 - 超时配置: %d\n"
"注意:超时前已解析的端口数据已保存到数据库,但扫描未完全完成。",
tool_name, config_timeout
)
user_log(scan_id, "port_scan", f"{tool_name} failed: {reason}", "error")
except Exception as exc:
# 其他异常
failed_tools.append({'tool': tool_name, 'reason': str(exc)})
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("工具 %s 执行失败: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "port_scan", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
@@ -376,8 +382,8 @@ def port_scan_flow(
'scan_id': int,
'target': str,
'scan_workspace_dir': str,
'domains_file': str,
'domain_count': int,
'hosts_file': str,
'host_count': int,
'processed_records': int,
'executed_tasks': list,
'tool_stats': {
@@ -420,25 +426,28 @@ def port_scan_flow(
"="*60
)
user_log(scan_id, "port_scan", "Starting port scan")
# Step 0: 创建工作目录
from apps.scan.utils import setup_scan_directory
port_scan_dir = setup_scan_directory(scan_workspace_dir, 'port_scan')
# Step 1: 导出扫描目标列表到文件(根据 Target 类型自动决定内容)
targets_file, target_count, target_type = _export_scan_targets(target_id, port_scan_dir)
# Step 1: 导出主机列表到文件(根据 Target 类型自动决定内容)
hosts_file, host_count, target_type = _export_hosts(target_id, port_scan_dir)
if target_count == 0:
logger.warning("目标下没有可扫描的地址,跳过端口扫描")
if host_count == 0:
logger.warning("跳过端口扫描:没有主机可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "port_scan", "Skipped: no hosts to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': scan_workspace_dir,
'targets_file': targets_file,
'target_count': 0,
'hosts_file': hosts_file,
'host_count': 0,
'target_type': target_type,
'processed_records': 0,
'executed_tasks': ['export_scan_targets'],
'executed_tasks': ['export_hosts'],
'tool_stats': {
'total': 0,
'successful': 0,
@@ -460,17 +469,19 @@ def port_scan_flow(
logger.info("Step 3: 串行执行扫描工具")
tool_stats, processed_records, successful_tool_names, failed_tools = _run_scans_sequentially(
enabled_tools=enabled_tools,
domains_file=targets_file, # 现在是 targets_file兼容原参数名
domains_file=hosts_file,
port_scan_dir=port_scan_dir,
scan_id=scan_id,
target_id=target_id,
target_name=target_name
)
logger.info("="*60 + "\n✓ 端口扫描完成\n" + "="*60)
# 记录 Flow 完成
logger.info("✓ 端口扫描完成 - 发现端口: %d", processed_records)
user_log(scan_id, "port_scan", f"port_scan completed: found {processed_records} ports")
# 动态生成已执行的任务列表
executed_tasks = ['export_scan_targets', 'parse_config']
executed_tasks = ['export_hosts', 'parse_config']
executed_tasks.extend([f'run_and_stream_save_ports ({tool})' for tool in tool_stats.keys()])
return {
@@ -478,8 +489,8 @@ def port_scan_flow(
'scan_id': scan_id,
'target': target_name,
'scan_workspace_dir': scan_workspace_dir,
'targets_file': targets_file,
'target_count': target_count,
'hosts_file': hosts_file,
'host_count': host_count,
'target_type': target_type,
'processed_records': processed_records,
'executed_tasks': executed_tasks,
@@ -488,8 +499,8 @@ def port_scan_flow(
'successful': len(successful_tool_names),
'failed': len(failed_tools),
'successful_tools': successful_tool_names,
'failed_tools': failed_tools, # [{'tool': 'naabu_active', 'reason': '超时'}]
'details': tool_stats # 详细结果(保留向后兼容)
'failed_tools': failed_tools,
'details': tool_stats
}
}

View File

@@ -0,0 +1,202 @@
"""
截图 Flow
负责编排截图的完整流程:
1. 从数据库获取 URL 列表(websites 和/或 endpoints)
2. 批量截图并保存快照
3. 同步到资产表
"""
# Django 环境初始化
from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
from pathlib import Path
from prefect import flow
from apps.scan.tasks.screenshot import capture_screenshots_task
from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_running,
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import user_log
from apps.scan.services.target_export_service import (
get_urls_with_fallback,
DataSource,
)
logger = logging.getLogger(__name__)
def _parse_screenshot_config(enabled_tools: dict) -> dict:
"""
解析截图配置
Args:
enabled_tools: 启用的工具配置
Returns:
截图配置字典
"""
# 从 enabled_tools 中获取 playwright 配置
playwright_config = enabled_tools.get('playwright', {})
return {
'concurrency': playwright_config.get('concurrency', 5),
'url_sources': playwright_config.get('url_sources', ['websites'])
}
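For reference, the parser falls back to a concurrency of 5 and the websites source when no playwright entry exists; a quick illustration (inputs hypothetical):

_parse_screenshot_config({})
# -> {'concurrency': 5, 'url_sources': ['websites']}
_parse_screenshot_config({'playwright': {'concurrency': 10, 'url_sources': ['websites', 'endpoints']}})
# -> {'concurrency': 10, 'url_sources': ['websites', 'endpoints']}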
def _map_url_sources_to_data_sources(url_sources: list[str]) -> list[str]:
"""
将配置中的 url_sources 映射为 DataSource 常量
Args:
url_sources: 配置中的来源列表,如 ['websites', 'endpoints']
Returns:
DataSource 常量列表
"""
source_mapping = {
'websites': DataSource.WEBSITE,
'endpoints': DataSource.ENDPOINT,
}
sources = []
for source in url_sources:
if source in source_mapping:
sources.append(source_mapping[source])
else:
logger.warning("未知的 URL 来源: %s,跳过", source)
# 添加默认回退(从 subdomain 构造)
sources.append(DataSource.DEFAULT)
return sources
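A quick illustration of the mapping (input values hypothetical): unknown sources are skipped with a warning, and DataSource.DEFAULT is always appended as the fallback.

_map_url_sources_to_data_sources(['websites', 'endpoints', 'bogus'])
# -> [DataSource.WEBSITE, DataSource.ENDPOINT, DataSource.DEFAULT]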
@flow(
name="screenshot",
log_prints=True,
on_running=[on_scan_flow_running],
on_completion=[on_scan_flow_completed],
on_failure=[on_scan_flow_failed],
)
def screenshot_flow(
scan_id: int,
target_name: str,
target_id: int,
scan_workspace_dir: str,
enabled_tools: dict
) -> dict:
"""
截图 Flow
工作流程:
Step 1: 解析配置
Step 2: 收集 URL 列表
Step 3: 批量截图并保存快照
Step 4: 同步到资产表
Args:
scan_id: 扫描任务 ID
target_name: 目标名称
target_id: 目标 ID
scan_workspace_dir: 扫描工作空间目录
enabled_tools: 启用的工具配置
Returns:
dict: {
'success': bool,
'scan_id': int,
'target': str,
'total_urls': int,
'successful': int,
'failed': int,
'synced': int
}
"""
try:
logger.info(
"="*60 + "\n" +
"开始截图扫描\n" +
f" Scan ID: {scan_id}\n" +
f" Target: {target_name}\n" +
f" Workspace: {scan_workspace_dir}\n" +
"="*60
)
user_log(scan_id, "screenshot", "Starting screenshot capture")
# Step 1: 解析配置
config = _parse_screenshot_config(enabled_tools)
concurrency = config['concurrency']
url_sources = config['url_sources']
logger.info("截图配置 - 并发: %d, URL来源: %s", concurrency, url_sources)
# Step 2: 使用统一服务收集 URL带黑名单过滤和回退
data_sources = _map_url_sources_to_data_sources(url_sources)
result = get_urls_with_fallback(target_id, sources=data_sources)
urls = result['urls']
logger.info(
"URL 收集完成 - 来源: %s, 数量: %d, 尝试过: %s",
result['source'], result['total_count'], result['tried_sources']
)
if not urls:
logger.warning("没有可截图的 URL跳过截图任务")
user_log(scan_id, "screenshot", "Skipped: no URLs to capture", "warning")
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'total_urls': 0,
'successful': 0,
'failed': 0,
'synced': 0
}
user_log(scan_id, "screenshot", f"Found {len(urls)} URLs to capture (source: {result['source']})")
# Step 3: 批量截图
logger.info("Step 3: 批量截图 - %d 个 URL", len(urls))
capture_result = capture_screenshots_task(
urls=urls,
scan_id=scan_id,
target_id=target_id,
config={'concurrency': concurrency}
)
# Step 4: 同步到资产表
logger.info("Step 4: 同步截图到资产表")
from apps.asset.services.screenshot_service import ScreenshotService
screenshot_service = ScreenshotService()
synced = screenshot_service.sync_screenshots_to_asset(scan_id, target_id)
logger.info(
"✓ 截图完成 - 总数: %d, 成功: %d, 失败: %d, 同步: %d",
capture_result['total'], capture_result['successful'], capture_result['failed'], synced
)
user_log(
scan_id, "screenshot",
f"Screenshot completed: {capture_result['successful']}/{capture_result['total']} captured, {synced} synced"
)
return {
'success': True,
'scan_id': scan_id,
'target': target_name,
'total_urls': capture_result['total'],
'successful': capture_result['successful'],
'failed': capture_result['failed'],
'synced': synced
}
except Exception as e:
logger.exception("截图 Flow 失败: %s", e)
user_log(scan_id, "screenshot", f"Screenshot failed: {e}", "error")
raise
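A minimal sketch of calling the flow directly; all IDs, paths, and the tool config are placeholders:

result = screenshot_flow(
    scan_id=42,
    target_name='example.com',
    target_id=7,
    scan_workspace_dir='/data/scans/42',
    enabled_tools={'playwright': {'enabled': True, 'concurrency': 5, 'url_sources': ['websites']}},
)
# e.g. result -> {'success': True, 'total_urls': 30, 'successful': 28, 'failed': 2, 'synced': 28, ...}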

View File

@@ -17,6 +17,7 @@ from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
import os
import subprocess
import time
from pathlib import Path
from typing import Callable
from prefect import flow
@@ -26,7 +27,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command
from apps.scan.utils import config_parser, build_scan_command, user_log
logger = logging.getLogger(__name__)
@@ -164,12 +165,12 @@ def _run_scans_sequentially(
for tool_name, tool_config in enabled_tools.items():
# 1. 构建完整命令(变量替换)
try:
command_params = {'url_file': urls_file}
command = build_scan_command(
tool_name=tool_name,
scan_type='site_scan',
command_params={
'url_file': urls_file
},
command_params=command_params,
tool_config=tool_config
)
except Exception as e:
@@ -198,6 +199,7 @@ def _run_scans_sequentially(
"开始执行 %s 站点扫描 - URL数: %d, 最终超时: %ds",
tool_name, total_urls, timeout
)
user_log(scan_id, "site_scan", f"Running {tool_name}: {command}")
# 3. 执行扫描任务
try:
@@ -218,29 +220,35 @@ def _run_scans_sequentially(
'result': result,
'timeout': timeout
}
processed_records += result.get('processed_records', 0)
tool_records = result.get('processed_records', 0)
tool_created = result.get('created_websites', 0)
processed_records += tool_records
logger.info(
"✓ 工具 %s 流式处理完成 - 处理记录: %d, 创建站点: %d, 跳过: %d",
tool_name,
result.get('processed_records', 0),
result.get('created_websites', 0),
tool_records,
tool_created,
result.get('skipped_no_subdomain', 0) + result.get('skipped_failed', 0)
)
user_log(scan_id, "site_scan", f"{tool_name} completed: found {tool_created} websites")
except subprocess.TimeoutExpired as exc:
# 超时异常单独处理
reason = f"执行超时(配置: {timeout}秒)"
reason = f"timeout after {timeout}s"
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.warning(
"⚠️ 工具 %s 执行超时 - 超时配置: %d\n"
"注意:超时前已解析的站点数据已保存到数据库,但扫描未完全完成。",
tool_name, timeout
)
user_log(scan_id, "site_scan", f"{tool_name} failed: {reason}", "error")
except Exception as exc:
# 其他异常
failed_tools.append({'tool': tool_name, 'reason': str(exc)})
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("工具 %s 执行失败: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "site_scan", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
@@ -379,6 +387,8 @@ def site_scan_flow(
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir 不能为空")
user_log(scan_id, "site_scan", "Starting site scan")
# Step 0: 创建工作目录
from apps.scan.utils import setup_scan_directory
site_scan_dir = setup_scan_directory(scan_workspace_dir, 'site_scan')
@@ -389,7 +399,8 @@ def site_scan_flow(
)
if total_urls == 0:
logger.warning("目标下没有可用的站点URL,跳过站点扫描")
logger.warning("跳过站点扫描:没有站点 URL 可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "site_scan", "Skipped: no site URLs to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -432,8 +443,6 @@ def site_scan_flow(
target_name=target_name
)
logger.info("="*60 + "\n✓ 站点扫描完成\n" + "="*60)
# 动态生成已执行的任务列表
executed_tasks = ['export_site_urls', 'parse_config']
executed_tasks.extend([f'run_and_stream_save_websites ({tool})' for tool in tool_stats.keys()])
@@ -443,6 +452,10 @@ def site_scan_flow(
total_skipped_no_subdomain = sum(stats['result'].get('skipped_no_subdomain', 0) for stats in tool_stats.values())
total_skipped_failed = sum(stats['result'].get('skipped_failed', 0) for stats in tool_stats.values())
# 记录 Flow 完成
logger.info("✓ 站点扫描完成 - 创建站点: %d", total_created)
user_log(scan_id, "site_scan", f"site_scan completed: found {total_created} websites")
return {
'success': True,
'scan_id': scan_id,

View File

@@ -30,7 +30,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_wordlist_local
from apps.scan.utils import build_scan_command, ensure_wordlist_local, user_log
from apps.engine.services.wordlist_service import WordlistService
from apps.common.normalizer import normalize_domain
from apps.common.validators import validate_domain
@@ -77,7 +77,9 @@ def _validate_and_normalize_target(target_name: str) -> str:
def _run_scans_parallel(
enabled_tools: dict,
domain_name: str,
result_dir: Path
result_dir: Path,
scan_id: int,
provider_config_path: str = None
) -> tuple[list, list, list]:
"""
并行运行所有启用的子域名扫描工具
@@ -86,6 +88,8 @@ def _run_scans_parallel(
enabled_tools: 启用的工具配置字典 {'tool_name': {'timeout': 600, ...}}
domain_name: 目标域名
result_dir: 结果输出目录
scan_id: 扫描任务 ID(用于记录日志)
provider_config_path: Provider 配置文件路径(可选,用于 subfinder)
Returns:
tuple: (result_files, failed_tools, successful_tool_names)
@@ -110,13 +114,19 @@ def _run_scans_parallel(
# 1.2 构建完整命令(变量替换)
try:
command_params = {
'domain': domain_name, # 对应 {domain}
'output_file': output_file # 对应 {output_file}
}
# 如果是 subfinder 且有 provider_config添加到参数
if tool_name == 'subfinder' and provider_config_path:
command_params['provider_config'] = provider_config_path
command = build_scan_command(
tool_name=tool_name,
scan_type='subdomain_discovery',
command_params={
'domain': domain_name, # 对应 {domain}
'output_file': output_file # 对应 {output_file}
},
command_params=command_params,
tool_config=tool_config
)
except Exception as e:
@@ -137,6 +147,9 @@ def _run_scans_parallel(
f"提交任务 - 工具: {tool_name}, 超时: {timeout}s, 输出: {output_file}"
)
# 记录工具开始执行日志
user_log(scan_id, "subdomain_discovery", f"Running {tool_name}: {command}")
future = run_subdomain_discovery_task.submit(
tool=tool_name,
command=command,
@@ -164,16 +177,19 @@ def _run_scans_parallel(
if result:
result_files.append(result)
logger.info("✓ 扫描工具 %s 执行成功: %s", tool_name, result)
user_log(scan_id, "subdomain_discovery", f"{tool_name} completed")
else:
failure_msg = f"{tool_name}: 未生成结果文件"
failures.append(failure_msg)
failed_tools.append({'tool': tool_name, 'reason': '未生成结果文件'})
logger.warning("⚠️ 扫描工具 %s 未生成结果文件", tool_name)
user_log(scan_id, "subdomain_discovery", f"{tool_name} failed: no output file", "error")
except Exception as e:
failure_msg = f"{tool_name}: {str(e)}"
failures.append(failure_msg)
failed_tools.append({'tool': tool_name, 'reason': str(e)})
logger.warning("⚠️ 扫描工具 %s 执行失败: %s", tool_name, str(e))
user_log(scan_id, "subdomain_discovery", f"{tool_name} failed: {str(e)}", "error")
# 4. 检查是否有成功的工具
if not result_files:
@@ -203,7 +219,8 @@ def _run_single_tool(
tool_config: dict,
command_params: dict,
result_dir: Path,
scan_type: str = 'subdomain_discovery'
scan_type: str = 'subdomain_discovery',
scan_id: int = None
) -> str:
"""
运行单个扫描工具
@@ -214,6 +231,7 @@ def _run_single_tool(
command_params: 命令参数
result_dir: 结果目录
scan_type: 扫描类型
scan_id: 扫描 ID(用于记录用户日志)
Returns:
str: 输出文件路径,失败返回空字符串
@@ -242,7 +260,9 @@ def _run_single_tool(
if timeout == 'auto':
timeout = 3600
logger.info(f"执行 {tool_name}: timeout={timeout}s")
logger.info(f"执行 {tool_name}: {command}")
if scan_id:
user_log(scan_id, scan_type, f"Running {tool_name}: {command}")
try:
result = run_subdomain_discovery_task(
@@ -401,7 +421,6 @@ def subdomain_discovery_flow(
logger.warning("目标域名无效,跳过子域名发现扫描: %s", e)
return _empty_result(scan_id, target_name, scan_workspace_dir)
# 验证成功后打印日志
logger.info(
"="*60 + "\n" +
"开始子域名发现扫描\n" +
@@ -410,6 +429,7 @@ def subdomain_discovery_flow(
f" Workspace: {scan_workspace_dir}\n" +
"="*60
)
user_log(scan_id, "subdomain_discovery", f"Starting subdomain discovery for {domain_name}")
# 解析配置
passive_tools = scan_config.get('passive_tools', {})
@@ -428,24 +448,37 @@ def subdomain_discovery_flow(
failed_tools = []
successful_tool_names = []
# ==================== Stage 1: 被动收集(并行)====================
logger.info("=" * 40)
logger.info("Stage 1: 被动收集(并行)")
logger.info("=" * 40)
# ==================== 生成 Provider 配置文件 ====================
# 为 subfinder 生成第三方数据源配置
provider_config_path = None
try:
from apps.scan.services.subfinder_provider_config_service import SubfinderProviderConfigService
provider_config_service = SubfinderProviderConfigService()
provider_config_path = provider_config_service.generate(str(result_dir))
if provider_config_path:
logger.info(f"Provider 配置文件已生成: {provider_config_path}")
user_log(scan_id, "subdomain_discovery", "Provider config generated for subfinder")
except Exception as e:
logger.warning(f"生成 Provider 配置文件失败: {e}")
# ==================== Stage 1: 被动收集(并行)====================
if enabled_passive_tools:
logger.info("=" * 40)
logger.info("Stage 1: 被动收集(并行)")
logger.info("=" * 40)
logger.info("启用工具: %s", ', '.join(enabled_passive_tools.keys()))
user_log(scan_id, "subdomain_discovery", f"Stage 1: passive collection ({', '.join(enabled_passive_tools.keys())})")
result_files, stage1_failed, stage1_success = _run_scans_parallel(
enabled_tools=enabled_passive_tools,
domain_name=domain_name,
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id,
provider_config_path=provider_config_path
)
all_result_files.extend(result_files)
failed_tools.extend(stage1_failed)
successful_tool_names.extend(stage1_success)
executed_tasks.extend([f'passive ({tool})' for tool in stage1_success])
else:
logger.warning("未启用任何被动收集工具")
# 合并 Stage 1 结果
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
@@ -456,7 +489,6 @@ def subdomain_discovery_flow(
else:
# 创建空文件
Path(current_result).touch()
logger.warning("Stage 1 无结果,创建空文件")
# ==================== Stage 2: 字典爆破(可选)====================
bruteforce_enabled = bruteforce_config.get('enabled', False)
@@ -464,6 +496,7 @@ def subdomain_discovery_flow(
logger.info("=" * 40)
logger.info("Stage 2: 字典爆破")
logger.info("=" * 40)
user_log(scan_id, "subdomain_discovery", "Stage 2: bruteforce")
bruteforce_tool_config = bruteforce_config.get('subdomain_bruteforce', {})
wordlist_name = bruteforce_tool_config.get('wordlist_name', 'dns_wordlist.txt')
@@ -496,22 +529,16 @@ def subdomain_discovery_flow(
**bruteforce_tool_config,
'timeout': timeout_value,
}
logger.info(
"subdomain_bruteforce 使用自动 timeout: %s 秒 (字典行数=%s, 3秒/行)",
timeout_value,
line_count_int,
)
brute_output = str(result_dir / f"subs_brute_{timestamp}.txt")
brute_result = _run_single_tool(
tool_name='subdomain_bruteforce',
tool_config=bruteforce_tool_config,
command_params={
'domain': domain_name,
'wordlist': local_wordlist_path,
'output_file': brute_output
},
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
if brute_result:
@@ -522,11 +549,16 @@ def subdomain_discovery_flow(
)
successful_tool_names.append('subdomain_bruteforce')
executed_tasks.append('bruteforce')
logger.info("✓ subdomain_bruteforce 执行完成")
user_log(scan_id, "subdomain_discovery", "subdomain_bruteforce completed")
else:
failed_tools.append({'tool': 'subdomain_bruteforce', 'reason': '执行失败'})
logger.warning("⚠️ subdomain_bruteforce 执行失败")
user_log(scan_id, "subdomain_discovery", "subdomain_bruteforce failed: execution failed", "error")
except Exception as exc:
logger.warning("字典准备失败,跳过字典爆破: %s", exc)
failed_tools.append({'tool': 'subdomain_bruteforce', 'reason': str(exc)})
logger.warning("字典准备失败,跳过字典爆破: %s", exc)
user_log(scan_id, "subdomain_discovery", f"subdomain_bruteforce failed: {str(exc)}", "error")
# ==================== Stage 3: 变异生成 + 验证(可选)====================
permutation_enabled = permutation_config.get('enabled', False)
@@ -534,6 +566,7 @@ def subdomain_discovery_flow(
logger.info("=" * 40)
logger.info("Stage 3: 变异生成 + 存活验证(流式管道)")
logger.info("=" * 40)
user_log(scan_id, "subdomain_discovery", "Stage 3: permutation + resolve")
permutation_tool_config = permutation_config.get('subdomain_permutation_resolve', {})
@@ -587,20 +620,19 @@ def subdomain_discovery_flow(
'tool': 'subdomain_permutation_resolve',
'reason': f"采样检测到泛解析 (膨胀率 {ratio:.1f}x)"
})
user_log(scan_id, "subdomain_discovery", f"subdomain_permutation_resolve skipped: wildcard detected (ratio {ratio:.1f}x)", "warning")
else:
# === Step 3.2: 采样通过,执行完整变异 ===
logger.info("采样检测通过,执行完整变异...")
permuted_output = str(result_dir / f"subs_permuted_{timestamp}.txt")
permuted_result = _run_single_tool(
tool_name='subdomain_permutation_resolve',
tool_config=permutation_tool_config,
command_params={
'input_file': current_result,
'output_file': permuted_output,
},
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
if permuted_result:
@@ -611,15 +643,21 @@ def subdomain_discovery_flow(
)
successful_tool_names.append('subdomain_permutation_resolve')
executed_tasks.append('permutation')
logger.info("✓ subdomain_permutation_resolve 执行完成")
user_log(scan_id, "subdomain_discovery", "subdomain_permutation_resolve completed")
else:
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': '执行失败'})
logger.warning("⚠️ subdomain_permutation_resolve 执行失败")
user_log(scan_id, "subdomain_discovery", "subdomain_permutation_resolve failed: execution failed", "error")
except subprocess.TimeoutExpired:
logger.warning(f"采样检测超时 ({SAMPLE_TIMEOUT}秒),跳过变异")
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': '采样检测超时'})
logger.warning(f"采样检测超时 ({SAMPLE_TIMEOUT}秒),跳过变异")
user_log(scan_id, "subdomain_discovery", "subdomain_permutation_resolve failed: sample detection timeout", "error")
except Exception as e:
logger.warning(f"采样检测失败: {e},跳过变异")
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': f'采样检测失败: {e}'})
logger.warning(f"采样检测失败: {e},跳过变异")
user_log(scan_id, "subdomain_discovery", f"subdomain_permutation_resolve failed: {str(e)}", "error")
# ==================== Stage 4: DNS 存活验证(可选)====================
# 无论是否启用 Stage 3只要 resolve.enabled 为 true 就会执行,对当前所有候选子域做统一 DNS 验证
@@ -628,6 +666,7 @@ def subdomain_discovery_flow(
logger.info("=" * 40)
logger.info("Stage 4: DNS 存活验证")
logger.info("=" * 40)
user_log(scan_id, "subdomain_discovery", "Stage 4: DNS resolve")
resolve_tool_config = resolve_config.get('subdomain_resolve', {})
@@ -651,30 +690,27 @@ def subdomain_discovery_flow(
**resolve_tool_config,
'timeout': timeout_value,
}
logger.info(
"subdomain_resolve 使用自动 timeout: %s 秒 (候选子域数=%s, 3秒/域名)",
timeout_value,
line_count_int,
)
alive_output = str(result_dir / f"subs_alive_{timestamp}.txt")
alive_result = _run_single_tool(
tool_name='subdomain_resolve',
tool_config=resolve_tool_config,
command_params={
'input_file': current_result,
'output_file': alive_output,
},
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
if alive_result:
current_result = alive_result
successful_tool_names.append('subdomain_resolve')
executed_tasks.append('resolve')
logger.info("✓ subdomain_resolve 执行完成")
user_log(scan_id, "subdomain_discovery", "subdomain_resolve completed")
else:
failed_tools.append({'tool': 'subdomain_resolve', 'reason': '执行失败'})
logger.warning("⚠️ subdomain_resolve 执行失败")
user_log(scan_id, "subdomain_discovery", "subdomain_resolve failed: execution failed", "error")
# ==================== Final: 保存到数据库 ====================
logger.info("=" * 40)
@@ -695,7 +731,11 @@ def subdomain_discovery_flow(
processed_domains = save_result.get('processed_records', 0)
executed_tasks.append('save_domains')
logger.info("="*60 + "\n✓ 子域名发现扫描完成\n" + "="*60)
# 记录 Flow 完成
logger.info("="*60)
logger.info("✓ 子域名发现扫描完成")
logger.info("="*60)
user_log(scan_id, "subdomain_discovery", f"subdomain_discovery completed: found {processed_domains} subdomains")
return {
'success': True,

View File

@@ -59,6 +59,8 @@ def domain_name_url_fetch_flow(
- IP 和 CIDR 类型会自动跳过(waymore 等工具不支持)
- 工具会自动收集 *.target_name 的所有历史 URL,无需遍历子域名
"""
from apps.scan.utils import user_log
try:
output_path = Path(output_dir)
output_path.mkdir(parents=True, exist_ok=True)
@@ -145,6 +147,9 @@ def domain_name_url_fetch_flow(
timeout,
)
# 记录工具开始执行日志
user_log(scan_id, "url_fetch", f"Running {tool_name}: {command}")
future = run_url_fetcher_task.submit(
tool_name=tool_name,
command=command,
@@ -163,22 +168,28 @@ def domain_name_url_fetch_flow(
if result and result.get("success"):
result_files.append(result["output_file"])
successful_tools.append(tool_name)
url_count = result.get("url_count", 0)
logger.info(
"✓ 工具 %s 执行成功 - 发现 URL: %d",
tool_name,
result.get("url_count", 0),
url_count,
)
user_log(scan_id, "url_fetch", f"{tool_name} completed: found {url_count} urls")
else:
reason = "未生成结果或无有效 URL"
failed_tools.append(
{
"tool": tool_name,
"reason": "未生成结果或无有效 URL",
"reason": reason,
}
)
logger.warning("⚠️ 工具 %s 未生成有效结果", tool_name)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
except Exception as e:
failed_tools.append({"tool": tool_name, "reason": str(e)})
reason = str(e)
failed_tools.append({"tool": tool_name, "reason": reason})
logger.warning("⚠️ 工具 %s 执行失败: %s", tool_name, e)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
logger.info(
"基于 domain_name 的 URL 获取完成 - 成功工具: %s, 失败工具: %s",

View File

@@ -25,6 +25,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import user_log
from .domain_name_url_fetch_flow import domain_name_url_fetch_flow
from .sites_url_fetch_flow import sites_url_fetch_flow
@@ -291,6 +292,8 @@ def url_fetch_flow(
"="*60
)
user_log(scan_id, "url_fetch", "Starting URL fetch")
# Step 1: 准备工作目录
logger.info("Step 1: 准备工作目录")
from apps.scan.utils import setup_scan_directory
@@ -403,7 +406,9 @@ def url_fetch_flow(
target_id=target_id
)
logger.info("="*60 + "\n✓ URL 获取扫描完成\n" + "="*60)
# 记录 Flow 完成
logger.info("✓ URL 获取完成 - 保存 endpoints: %d", saved_count)
user_log(scan_id, "url_fetch", f"url_fetch completed: found {saved_count} endpoints")
# 构建已执行的任务列表
executed_tasks = ['setup_directory', 'classify_tools']

View File

@@ -116,7 +116,8 @@ def sites_url_fetch_flow(
tools=enabled_tools,
input_file=sites_file,
input_type="sites_file",
output_dir=output_path
output_dir=output_path,
scan_id=scan_id
)
logger.info(

View File

@@ -152,7 +152,8 @@ def run_tools_parallel(
tools: dict,
input_file: str,
input_type: str,
output_dir: Path
output_dir: Path,
scan_id: int
) -> tuple[list, list, list]:
"""
并行执行工具列表
@@ -162,11 +163,13 @@ def run_tools_parallel(
input_file: 输入文件路径
input_type: 输入类型
output_dir: 输出目录
scan_id: 扫描任务 ID(用于记录日志)
Returns:
tuple: (result_files, failed_tools, successful_tool_names)
"""
from apps.scan.tasks.url_fetch import run_url_fetcher_task
from apps.scan.utils import user_log
futures: dict[str, object] = {}
failed_tools: list[dict] = []
@@ -192,6 +195,9 @@ def run_tools_parallel(
exec_params["timeout"],
)
# 记录工具开始执行日志
user_log(scan_id, "url_fetch", f"Running {tool_name}: {exec_params['command']}")
# 提交并行任务
future = run_url_fetcher_task.submit(
tool_name=tool_name,
@@ -208,22 +214,28 @@ def run_tools_parallel(
result = future.result()
if result and result['success']:
result_files.append(result['output_file'])
url_count = result['url_count']
logger.info(
"✓ 工具 %s 执行成功 - 发现 URL: %d",
tool_name, result['url_count']
tool_name, url_count
)
user_log(scan_id, "url_fetch", f"{tool_name} completed: found {url_count} urls")
else:
reason = '未生成结果或无有效URL'
failed_tools.append({
'tool': tool_name,
'reason': '未生成结果或无有效URL'
'reason': reason
})
logger.warning("⚠️ 工具 %s 未生成有效结果", tool_name)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
except Exception as e:
reason = str(e)
failed_tools.append({
'tool': tool_name,
'reason': str(e)
'reason': reason
})
logger.warning("⚠️ 工具 %s 执行失败: %s", tool_name, e)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
# 计算成功的工具列表
failed_tool_names = [f['tool'] for f in failed_tools]
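The submit/collect shape used here is the standard Prefect pattern: `.submit()` schedules each tool on the flow's task runner and returns a future, and `future.result()` re-raises the task's exception inside the try block so one failing tool cannot abort the rest. A stripped-down sketch of the same shape (task body illustrative):

from prefect import flow, task

@task
def run_tool(name: str) -> str:
    return f"{name}.txt"  # stand-in for the real fetcher task

@flow
def run_all(tools: list[str]) -> list[str]:
    futures = {name: run_tool.submit(name) for name in tools}
    results = []
    for name, future in futures.items():
        try:
            results.append(future.result())  # blocks; re-raises task exceptions
        except Exception as exc:
            print(f"{name} failed: {exc}")  # record the failure and keep going
    return results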

View File

@@ -12,7 +12,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_nuclei_templates_local
from apps.scan.utils import build_scan_command, ensure_nuclei_templates_local, user_log
from apps.scan.tasks.vuln_scan import (
export_endpoints_task,
run_vuln_tool_task,
@@ -141,6 +141,7 @@ def endpoints_vuln_scan_flow(
# Dalfox XSS 使用流式任务,一边解析一边保存漏洞结果
if tool_name == "dalfox_xss":
logger.info("开始执行漏洞扫描工具 %s(流式保存漏洞结果,已提交任务)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_and_stream_save_dalfox_vulns_task.submit(
cmd=command,
tool_name=tool_name,
@@ -163,6 +164,7 @@ def endpoints_vuln_scan_flow(
elif tool_name == "nuclei":
# Nuclei 使用流式任务
logger.info("开始执行漏洞扫描工具 %s(流式保存漏洞结果,已提交任务)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_and_stream_save_nuclei_vulns_task.submit(
cmd=command,
tool_name=tool_name,
@@ -185,6 +187,7 @@ def endpoints_vuln_scan_flow(
else:
# 其他工具仍使用非流式执行逻辑
logger.info("开始执行漏洞扫描工具 %s(已提交任务)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_vuln_tool_task.submit(
tool_name=tool_name,
command=command,
@@ -203,24 +206,34 @@ def endpoints_vuln_scan_flow(
# 统一收集所有工具的执行结果
for tool_name, meta in tool_futures.items():
future = meta["future"]
result = future.result()
try:
result = future.result()
if meta["mode"] == "streaming":
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"processed_records": result.get("processed_records"),
"created_vulns": result.get("created_vulns"),
"command_log_file": meta["log_file"],
}
else:
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"duration": result.get("duration"),
"returncode": result.get("returncode"),
"command_log_file": result.get("command_log_file"),
}
if meta["mode"] == "streaming":
created_vulns = result.get("created_vulns", 0)
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"processed_records": result.get("processed_records"),
"created_vulns": created_vulns,
"command_log_file": meta["log_file"],
}
logger.info("✓ 工具 %s 执行完成 - 漏洞: %d", tool_name, created_vulns)
user_log(scan_id, "vuln_scan", f"{tool_name} completed: found {created_vulns} vulnerabilities")
else:
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"duration": result.get("duration"),
"returncode": result.get("returncode"),
"command_log_file": result.get("command_log_file"),
}
logger.info("✓ 工具 %s 执行完成 - returncode=%s", tool_name, result.get("returncode"))
user_log(scan_id, "vuln_scan", f"{tool_name} completed")
except Exception as e:
reason = str(e)
logger.error("工具 %s 执行失败: %s", tool_name, e, exc_info=True)
user_log(scan_id, "vuln_scan", f"{tool_name} failed: {reason}", "error")
return {
"success": True,

View File

@@ -11,6 +11,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_failed,
)
from apps.scan.configs.command_templates import get_command_template
from apps.scan.utils import user_log
from .endpoints_vuln_scan_flow import endpoints_vuln_scan_flow
@@ -72,6 +73,9 @@ def vuln_scan_flow(
if not enabled_tools:
raise ValueError("enabled_tools 不能为空")
logger.info("开始漏洞扫描 - Scan ID: %s, Target: %s", scan_id, target_name)
user_log(scan_id, "vuln_scan", "Starting vulnerability scan")
# Step 1: 分类工具
endpoints_tools, other_tools = _classify_vuln_tools(enabled_tools)
@@ -99,6 +103,14 @@ def vuln_scan_flow(
enabled_tools=endpoints_tools,
)
# 记录 Flow 完成
total_vulns = sum(
r.get("created_vulns", 0)
for r in endpoint_result.get("tool_results", {}).values()
)
logger.info("✓ 漏洞扫描完成 - 新增漏洞: %d", total_vulns)
user_log(scan_id, "vuln_scan", f"vuln_scan completed: found {total_vulns} vulnerabilities")
# 目前只有一个子 Flow直接返回其结果
return endpoint_result

View File

@@ -14,6 +14,7 @@ from prefect import Flow
from prefect.client.schemas import FlowRun, State
from apps.scan.utils.performance import FlowPerformanceTracker
from apps.scan.utils import user_log
logger = logging.getLogger(__name__)
@@ -136,6 +137,7 @@ def on_scan_flow_failed(flow: Flow, flow_run: FlowRun, state: State) -> None:
- 更新阶段进度为 failed
- 发送扫描失败通知
- 记录性能指标(含错误信息)
- 写入 ScanLog 供前端显示
Args:
flow: Prefect Flow 对象
@@ -152,6 +154,11 @@ def on_scan_flow_failed(flow: Flow, flow_run: FlowRun, state: State) -> None:
# 提取错误信息
error_message = str(state.message) if state.message else "未知错误"
# 写入 ScanLog 供前端显示
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
user_log(scan_id, stage, f"Failed: {error_message}", "error")
# 记录性能指标(失败情况)
tracker = _flow_trackers.pop(str(flow_run.id), None)
if tracker:
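`_get_stage_from_flow_name` is not shown in this diff; given that the flow names above (`port_scan`, `site_scan`, `screenshot`, ...) double as stage keys, a plausible implementation is a simple membership check — an assumption, not the project's code:

_KNOWN_STAGES = {
    'subdomain_discovery', 'port_scan', 'site_scan', 'url_fetch',
    'fingerprint_detect', 'screenshot', 'vuln_scan',
}

def _get_stage_from_flow_name(flow_name: str) -> str | None:
    # Flow names are used verbatim as stage identifiers; flows without a
    # user-facing stage (e.g. initiate_scan) yield None and skip user_log.
    return flow_name if flow_name in _KNOWN_STAGES else None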

View File

@@ -1,4 +1,4 @@
# Generated by Django 5.2.7 on 2026-01-02 04:45
# Generated by Django 5.2.7 on 2026-01-06 00:55
import django.contrib.postgres.fields
import django.db.models.deletion
@@ -31,6 +31,20 @@ class Migration(migrations.Migration):
'db_table': 'notification_settings',
},
),
migrations.CreateModel(
name='SubfinderProviderSettings',
fields=[
('id', models.BigAutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('providers', models.JSONField(default=dict, help_text='各 Provider 的 API Key 配置')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
],
options={
'verbose_name': 'Subfinder Provider 配置',
'verbose_name_plural': 'Subfinder Provider 配置',
'db_table': 'subfinder_provider_settings',
},
),
migrations.CreateModel(
name='Notification',
fields=[
@@ -57,7 +71,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(primary_key=True, serialize=False)),
('engine_ids', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), default=list, help_text='引擎 ID 列表', size=None)),
('engine_names', models.JSONField(default=list, help_text='引擎名称列表,如 ["引擎A", "引擎B"]')),
('merged_configuration', models.TextField(default='', help_text='合并后的 YAML 配置')),
('yaml_configuration', models.TextField(default='', help_text='YAML 格式的扫描配置')),
('created_at', models.DateTimeField(auto_now_add=True, help_text='任务创建时间')),
('stopped_at', models.DateTimeField(blank=True, help_text='扫描结束时间', null=True)),
('status', models.CharField(choices=[('cancelled', '已取消'), ('completed', '已完成'), ('failed', '失败'), ('initiated', '初始化'), ('running', '运行中')], db_index=True, default='initiated', help_text='任务状态', max_length=20)),
@@ -87,7 +101,22 @@ class Migration(migrations.Migration):
'verbose_name_plural': '扫描任务',
'db_table': 'scan',
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['-created_at'], name='scan_created_0bb6c7_idx'), models.Index(fields=['target'], name='scan_target__718b9d_idx'), models.Index(fields=['deleted_at', '-created_at'], name='scan_deleted_eb17e8_idx')],
},
),
migrations.CreateModel(
name='ScanLog',
fields=[
('id', models.BigAutoField(primary_key=True, serialize=False)),
('level', models.CharField(choices=[('info', 'Info'), ('warning', 'Warning'), ('error', 'Error')], default='info', help_text='日志级别', max_length=10)),
('content', models.TextField(help_text='日志内容')),
('created_at', models.DateTimeField(auto_now_add=True, db_index=True, help_text='创建时间')),
('scan', models.ForeignKey(help_text='关联的扫描任务', on_delete=django.db.models.deletion.CASCADE, related_name='logs', to='scan.scan')),
],
options={
'verbose_name': '扫描日志',
'verbose_name_plural': '扫描日志',
'db_table': 'scan_log',
'ordering': ['created_at'],
},
),
migrations.CreateModel(
@@ -97,7 +126,7 @@ class Migration(migrations.Migration):
('name', models.CharField(help_text='任务名称', max_length=200)),
('engine_ids', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), default=list, help_text='引擎 ID 列表', size=None)),
('engine_names', models.JSONField(default=list, help_text='引擎名称列表,如 ["引擎A", "引擎B"]')),
('merged_configuration', models.TextField(default='', help_text='合并后的 YAML 配置')),
('yaml_configuration', models.TextField(default='', help_text='YAML 格式的扫描配置')),
('cron_expression', models.CharField(default='0 2 * * *', help_text='Cron 表达式,格式:分 时 日 月 周', max_length=100)),
('is_enabled', models.BooleanField(db_index=True, default=True, help_text='是否启用')),
('run_count', models.IntegerField(default=0, help_text='已执行次数')),
@@ -113,7 +142,34 @@ class Migration(migrations.Migration):
'verbose_name_plural': '定时扫描任务',
'db_table': 'scheduled_scan',
'ordering': ['-created_at'],
'indexes': [models.Index(fields=['-created_at'], name='scheduled_s_created_9b9c2e_idx'), models.Index(fields=['is_enabled', '-created_at'], name='scheduled_s_is_enab_23d660_idx'), models.Index(fields=['name'], name='scheduled_s_name_bf332d_idx')],
},
),
migrations.AddIndex(
model_name='scan',
index=models.Index(fields=['-created_at'], name='scan_created_0bb6c7_idx'),
),
migrations.AddIndex(
model_name='scan',
index=models.Index(fields=['target'], name='scan_target__718b9d_idx'),
),
migrations.AddIndex(
model_name='scan',
index=models.Index(fields=['deleted_at', '-created_at'], name='scan_deleted_eb17e8_idx'),
),
migrations.AddIndex(
model_name='scanlog',
index=models.Index(fields=['scan', 'created_at'], name='scan_log_scan_id_c4814a_idx'),
),
migrations.AddIndex(
model_name='scheduledscan',
index=models.Index(fields=['-created_at'], name='scheduled_s_created_9b9c2e_idx'),
),
migrations.AddIndex(
model_name='scheduledscan',
index=models.Index(fields=['is_enabled', '-created_at'], name='scheduled_s_is_enab_23d660_idx'),
),
migrations.AddIndex(
model_name='scheduledscan',
index=models.Index(fields=['name'], name='scheduled_s_name_bf332d_idx'),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 5.2.7 on 2026-01-07 14:03
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('scan', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='scan',
name='cached_screenshots_count',
field=models.IntegerField(default=0, help_text='缓存的截图数量'),
),
]

View File

@@ -0,0 +1,18 @@
"""Scan Models - 统一导出"""
from .scan_models import Scan, SoftDeleteManager
from .scan_log_model import ScanLog
from .scheduled_scan_model import ScheduledScan
from .subfinder_provider_settings_model import SubfinderProviderSettings
# 兼容旧名称(已废弃,请使用 SubfinderProviderSettings
ProviderSettings = SubfinderProviderSettings
__all__ = [
'Scan',
'ScanLog',
'ScheduledScan',
'SoftDeleteManager',
'SubfinderProviderSettings',
'ProviderSettings', # 兼容旧名称
]

View File

@@ -0,0 +1,41 @@
"""扫描日志模型"""
from django.db import models
class ScanLog(models.Model):
"""扫描日志模型"""
class Level(models.TextChoices):
INFO = 'info', 'Info'
WARNING = 'warning', 'Warning'
ERROR = 'error', 'Error'
id = models.BigAutoField(primary_key=True)
scan = models.ForeignKey(
'Scan',
on_delete=models.CASCADE,
related_name='logs',
db_index=True,
help_text='关联的扫描任务'
)
level = models.CharField(
max_length=10,
choices=Level.choices,
default=Level.INFO,
help_text='日志级别'
)
content = models.TextField(help_text='日志内容')
created_at = models.DateTimeField(auto_now_add=True, db_index=True, help_text='创建时间')
class Meta:
db_table = 'scan_log'
verbose_name = '扫描日志'
verbose_name_plural = '扫描日志'
ordering = ['created_at']
indexes = [
models.Index(fields=['scan', 'created_at']),
]
def __str__(self):
return f"[{self.level}] {self.content[:50]}"

View File

@@ -1,9 +1,9 @@
"""扫描相关模型"""
from django.db import models
from django.contrib.postgres.fields import ArrayField
from ..common.definitions import ScanStatus
from apps.common.definitions import ScanStatus
class SoftDeleteManager(models.Manager):
@@ -30,9 +30,9 @@ class Scan(models.Model):
default=list,
help_text='引擎名称列表,如 ["引擎A", "引擎B"]'
)
merged_configuration = models.TextField(
yaml_configuration = models.TextField(
default='',
help_text='合并后的 YAML 配置'
help_text='YAML 格式的扫描配置'
)
created_at = models.DateTimeField(auto_now_add=True, help_text='任务创建时间')
@@ -84,6 +84,7 @@ class Scan(models.Model):
cached_endpoints_count = models.IntegerField(default=0, help_text='缓存的端点数量')
cached_ips_count = models.IntegerField(default=0, help_text='缓存的IP地址数量')
cached_directories_count = models.IntegerField(default=0, help_text='缓存的目录数量')
cached_screenshots_count = models.IntegerField(default=0, help_text='缓存的截图数量')
cached_vulns_total = models.IntegerField(default=0, help_text='缓存的漏洞总数')
cached_vulns_critical = models.IntegerField(default=0, help_text='缓存的严重漏洞数量')
cached_vulns_high = models.IntegerField(default=0, help_text='缓存的高危漏洞数量')
@@ -97,99 +98,10 @@ class Scan(models.Model):
verbose_name_plural = '扫描任务'
ordering = ['-created_at']
indexes = [
models.Index(fields=['-created_at']), # 优化按创建时间降序排序list 查询的默认排序)
models.Index(fields=['target']), # 优化按目标查询扫描任务
models.Index(fields=['deleted_at', '-created_at']), # 软删除 + 时间索引
models.Index(fields=['-created_at']),
models.Index(fields=['target']),
models.Index(fields=['deleted_at', '-created_at']),
]
def __str__(self):
return f"Scan #{self.id} - {self.target.name}"
class ScheduledScan(models.Model):
"""
定时扫描任务模型
调度机制
- APScheduler 每分钟检查 next_run_time
- 到期任务通过 task_distributor 分发到 Worker 执行
- 支持 cron 表达式进行灵活调度
扫描模式(二选一):
- 组织扫描:设置 organization,执行时动态获取组织下所有目标
- 目标扫描:设置 target,扫描单个目标
- organization 优先级高于 target
"""
id = models.AutoField(primary_key=True)
# 基本信息
name = models.CharField(max_length=200, help_text='任务名称')
# 多引擎支持字段
engine_ids = ArrayField(
models.IntegerField(),
default=list,
help_text='引擎 ID 列表'
)
engine_names = models.JSONField(
default=list,
help_text='引擎名称列表,如 ["引擎A", "引擎B"]'
)
merged_configuration = models.TextField(
default='',
help_text='合并后的 YAML 配置'
)
# 关联的组织(组织扫描模式:执行时动态获取组织下所有目标)
organization = models.ForeignKey(
'targets.Organization',
on_delete=models.CASCADE,
related_name='scheduled_scans',
null=True,
blank=True,
help_text='扫描组织(设置后执行时动态获取组织下所有目标)'
)
# 关联的目标(目标扫描模式:扫描单个目标)
target = models.ForeignKey(
'targets.Target',
on_delete=models.CASCADE,
related_name='scheduled_scans',
null=True,
blank=True,
help_text='扫描单个目标(与 organization 二选一)'
)
# 调度配置 - 直接使用 Cron 表达式
cron_expression = models.CharField(
max_length=100,
default='0 2 * * *',
help_text='Cron 表达式,格式:分 时 日 月 周'
)
# 状态
is_enabled = models.BooleanField(default=True, db_index=True, help_text='是否启用')
# 执行统计
run_count = models.IntegerField(default=0, help_text='已执行次数')
last_run_time = models.DateTimeField(null=True, blank=True, help_text='上次执行时间')
next_run_time = models.DateTimeField(null=True, blank=True, help_text='下次执行时间')
# 时间戳
created_at = models.DateTimeField(auto_now_add=True, help_text='创建时间')
updated_at = models.DateTimeField(auto_now=True, help_text='更新时间')
class Meta:
db_table = 'scheduled_scan'
verbose_name = '定时扫描任务'
verbose_name_plural = '定时扫描任务'
ordering = ['-created_at']
indexes = [
models.Index(fields=['-created_at']),
models.Index(fields=['is_enabled', '-created_at']),
models.Index(fields=['name']), # 优化 name 搜索
]
def __str__(self):
return f"ScheduledScan #{self.id} - {self.name}"

View File

@@ -0,0 +1,73 @@
"""定时扫描任务模型"""
from django.db import models
from django.contrib.postgres.fields import ArrayField
class ScheduledScan(models.Model):
"""定时扫描任务模型"""
id = models.AutoField(primary_key=True)
name = models.CharField(max_length=200, help_text='任务名称')
engine_ids = ArrayField(
models.IntegerField(),
default=list,
help_text='引擎 ID 列表'
)
engine_names = models.JSONField(
default=list,
help_text='引擎名称列表,如 ["引擎A", "引擎B"]'
)
yaml_configuration = models.TextField(
default='',
help_text='YAML 格式的扫描配置'
)
organization = models.ForeignKey(
'targets.Organization',
on_delete=models.CASCADE,
related_name='scheduled_scans',
null=True,
blank=True,
help_text='扫描组织(设置后执行时动态获取组织下所有目标)'
)
target = models.ForeignKey(
'targets.Target',
on_delete=models.CASCADE,
related_name='scheduled_scans',
null=True,
blank=True,
help_text='扫描单个目标(与 organization 二选一)'
)
cron_expression = models.CharField(
max_length=100,
default='0 2 * * *',
help_text='Cron 表达式,格式:分 时 日 月 周'
)
is_enabled = models.BooleanField(default=True, db_index=True, help_text='是否启用')
run_count = models.IntegerField(default=0, help_text='已执行次数')
last_run_time = models.DateTimeField(null=True, blank=True, help_text='上次执行时间')
next_run_time = models.DateTimeField(null=True, blank=True, help_text='下次执行时间')
created_at = models.DateTimeField(auto_now_add=True, help_text='创建时间')
updated_at = models.DateTimeField(auto_now=True, help_text='更新时间')
class Meta:
db_table = 'scheduled_scan'
verbose_name = '定时扫描任务'
verbose_name_plural = '定时扫描任务'
ordering = ['-created_at']
indexes = [
models.Index(fields=['-created_at']),
models.Index(fields=['is_enabled', '-created_at']),
models.Index(fields=['name']),
]
def __str__(self):
return f"ScheduledScan #{self.id} - {self.name}"

View File

@@ -0,0 +1,64 @@
"""Subfinder Provider 配置模型(单例模式)
用于存储 subfinder 第三方数据源的 API Key 配置
"""
from django.db import models
class SubfinderProviderSettings(models.Model):
"""
Subfinder Provider 配置(单例模式)
存储第三方数据源的 API Key 配置,用于 subfinder 子域名发现
支持的 Provider:
- fofa: email + api_key (composite)
- censys: api_id + api_secret (composite)
- hunter, shodan, zoomeye, securitytrails, threatbook, quake: api_key (single)
"""
providers = models.JSONField(
default=dict,
help_text='各 Provider 的 API Key 配置'
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
db_table = 'subfinder_provider_settings'
verbose_name = 'Subfinder Provider 配置'
verbose_name_plural = 'Subfinder Provider 配置'
DEFAULT_PROVIDERS = {
'fofa': {'enabled': False, 'email': '', 'api_key': ''},
'hunter': {'enabled': False, 'api_key': ''},
'shodan': {'enabled': False, 'api_key': ''},
'censys': {'enabled': False, 'api_id': '', 'api_secret': ''},
'zoomeye': {'enabled': False, 'api_key': ''},
'securitytrails': {'enabled': False, 'api_key': ''},
'threatbook': {'enabled': False, 'api_key': ''},
'quake': {'enabled': False, 'api_key': ''},
}
def save(self, *args, **kwargs):
self.pk = 1
super().save(*args, **kwargs)
@classmethod
def get_instance(cls) -> 'SubfinderProviderSettings':
"""获取或创建单例实例"""
obj, _ = cls.objects.get_or_create(
pk=1,
defaults={'providers': cls.DEFAULT_PROVIDERS.copy()}
)
return obj
def get_provider_config(self, provider: str) -> dict:
"""获取指定 Provider 的配置"""
return self.providers.get(provider, self.DEFAULT_PROVIDERS.get(provider, {}))
def is_provider_enabled(self, provider: str) -> bool:
"""检查指定 Provider 是否启用"""
config = self.get_provider_config(provider)
return config.get('enabled', False)
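Typical usage of the singleton (the API key value is a placeholder):

settings = SubfinderProviderSettings.get_instance()
settings.providers['shodan'] = {'enabled': True, 'api_key': 'PLACEHOLDER'}
settings.save()  # save() forces pk=1, so this always updates the same row

if settings.is_provider_enabled('shodan'):
    api_key = settings.get_provider_config('shodan')['api_key']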

View File

@@ -21,9 +21,6 @@ urlpatterns = [
# 标记全部已读
path('mark-all-as-read/', NotificationMarkAllAsReadView.as_view(), name='mark-all-as-read'),
# 测试通知
path('test/', views.notifications_test, name='test'),
]
# WebSocket 实时通知路由在 routing.py 中定义(ws://host/ws/notifications/)

View File

@@ -23,45 +23,7 @@ from .services import NotificationService, NotificationSettingsService
logger = logging.getLogger(__name__)
def notifications_test(request):
"""
测试通知推送
"""
try:
from .services import create_notification
from django.http import JsonResponse
level_param = request.GET.get('level', NotificationLevel.LOW)
try:
level_choice = NotificationLevel(level_param)
except ValueError:
level_choice = NotificationLevel.LOW
title = request.GET.get('title') or "测试通知"
message = request.GET.get('message') or "这是一条测试通知消息"
# 创建测试通知
notification = create_notification(
title=title,
message=message,
level=level_choice
)
return JsonResponse({
'success': True,
'message': '测试通知已发送',
'notification_id': notification.id
})
except Exception as e:
logger.error(f"发送测试通知失败: {e}")
return JsonResponse({
'success': False,
'error': str(e)
}, status=500)
# build_api_response 已废弃,请使用 success_response/error_response
def _parse_bool(value: str | None) -> bool | None:

View File

@@ -147,10 +147,10 @@ class FlowOrchestrator:
return True
return False
# 其他扫描类型:检查 tools
# 其他扫描类型(包括 screenshot:检查 tools
tools = scan_config.get('tools', {})
for tool_config in tools.values():
if tool_config.get('enabled', False):
if isinstance(tool_config, dict) and tool_config.get('enabled', False):
return True
return False
@@ -222,6 +222,10 @@ class FlowOrchestrator:
from apps.scan.flows.vuln_scan import vuln_scan_flow
return vuln_scan_flow
elif scan_type == 'screenshot':
from apps.scan.flows.screenshot_flow import screenshot_flow
return screenshot_flow
else:
logger.warning(f"未实现的扫描类型: {scan_type}")
return None
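Under the check above, a stage counts as enabled as soon as any entry under `tools` is a dict with `enabled: true`; the new `isinstance` guard tolerates scalar values mixed into the mapping. A hypothetical parsed config that passes:

scan_config = {
    'tools': {
        'playwright': {'enabled': True, 'concurrency': 5},  # enabled dict -> stage runs
        'note': 'scalar entries are now ignored by the isinstance() guard',
    }
}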

View File

@@ -104,7 +104,7 @@ class DjangoScanRepository:
target: Target,
engine_ids: List[int],
engine_names: List[str],
merged_configuration: str,
yaml_configuration: str,
results_dir: str,
status: ScanStatus = ScanStatus.INITIATED
) -> Scan:
@@ -115,7 +115,7 @@ class DjangoScanRepository:
target: 扫描目标
engine_ids: 引擎 ID 列表
engine_names: 引擎名称列表
merged_configuration: 合并后的 YAML 配置
yaml_configuration: YAML 格式的扫描配置
results_dir: 结果目录
status: 初始状态
@@ -126,7 +126,7 @@ class DjangoScanRepository:
target=target,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration,
yaml_configuration=yaml_configuration,
results_dir=results_dir,
status=status,
container_ids=[]
@@ -464,6 +464,7 @@ class DjangoScanRepository:
'endpoints': scan.endpoint_snapshots.count(),
'ips': ips_count,
'directories': scan.directory_snapshots.count(),
'screenshots': scan.screenshot_snapshots.count(),
'vulns_total': total_vulns,
'vulns_critical': severity_stats['critical'],
'vulns_high': severity_stats['high'],
@@ -478,6 +479,7 @@ class DjangoScanRepository:
'cached_endpoints_count': stats['endpoints'],
'cached_ips_count': stats['ips'],
'cached_directories_count': stats['directories'],
'cached_screenshots_count': stats['screenshots'],
'cached_vulns_total': stats['vulns_total'],
'cached_vulns_critical': stats['vulns_critical'],
'cached_vulns_high': stats['vulns_high'],

View File

@@ -31,7 +31,7 @@ class ScheduledScanDTO:
name: str = ''
engine_ids: List[int] = None # 多引擎支持
engine_names: List[str] = None # 引擎名称列表
merged_configuration: str = '' # 合并后的配置
yaml_configuration: str = '' # YAML 格式的扫描配置
organization_id: Optional[int] = None # 组织扫描模式
target_id: Optional[int] = None # 目标扫描模式
cron_expression: Optional[str] = None
@@ -114,7 +114,7 @@ class DjangoScheduledScanRepository:
name=dto.name,
engine_ids=dto.engine_ids,
engine_names=dto.engine_names,
merged_configuration=dto.merged_configuration,
yaml_configuration=dto.yaml_configuration,
organization_id=dto.organization_id, # 组织扫描模式
target_id=dto.target_id if not dto.organization_id else None, # 目标扫描模式
cron_expression=dto.cron_expression,
@@ -147,8 +147,8 @@ class DjangoScheduledScanRepository:
scheduled_scan.engine_ids = dto.engine_ids
if dto.engine_names is not None:
scheduled_scan.engine_names = dto.engine_names
if dto.merged_configuration is not None:
scheduled_scan.merged_configuration = dto.merged_configuration
if dto.yaml_configuration is not None:
scheduled_scan.yaml_configuration = dto.yaml_configuration
if dto.cron_expression is not None:
scheduled_scan.cron_expression = dto.cron_expression
if dto.is_enabled is not None:

View File

@@ -1,266 +0,0 @@
from rest_framework import serializers
from django.db.models import Count
from .models import Scan, ScheduledScan
class ScanSerializer(serializers.ModelSerializer):
"""扫描任务序列化器"""
target_name = serializers.SerializerMethodField()
class Meta:
model = Scan
fields = [
'id', 'target', 'target_name', 'engine_ids', 'engine_names',
'created_at', 'stopped_at', 'status', 'results_dir',
'container_ids', 'error_message'
]
read_only_fields = [
'id', 'created_at', 'stopped_at', 'results_dir',
'container_ids', 'error_message', 'status'
]
def get_target_name(self, obj):
"""获取目标名称"""
return obj.target.name if obj.target else None
class ScanHistorySerializer(serializers.ModelSerializer):
"""扫描历史列表专用序列化器
为前端扫描历史页面提供优化的数据格式,包括:
- 扫描汇总统计(子域名、端点、漏洞数量)
- 进度百分比和当前阶段
- 执行节点信息
"""
# 字段映射
target_name = serializers.CharField(source='target.name', read_only=True)
worker_name = serializers.CharField(source='worker.name', read_only=True, allow_null=True)
# 计算字段
summary = serializers.SerializerMethodField()
# 进度跟踪字段(直接从模型读取)
progress = serializers.IntegerField(read_only=True)
current_stage = serializers.CharField(read_only=True)
stage_progress = serializers.JSONField(read_only=True)
class Meta:
model = Scan
fields = [
'id', 'target', 'target_name', 'engine_ids', 'engine_names',
'worker_name', 'created_at', 'status', 'error_message', 'summary',
'progress', 'current_stage', 'stage_progress'
]
def get_summary(self, obj):
"""获取扫描汇总数据。
设计原则:
- 子域名/网站/端点/IP/目录使用缓存字段(避免实时 COUNT
- 漏洞统计使用 Scan 上的缓存字段,在扫描结束时统一聚合
"""
# 1. 使用缓存字段构建基础统计(子域名、网站、端点、IP、目录)
summary = {
'subdomains': obj.cached_subdomains_count or 0,
'websites': obj.cached_websites_count or 0,
'endpoints': obj.cached_endpoints_count or 0,
'ips': obj.cached_ips_count or 0,
'directories': obj.cached_directories_count or 0,
}
# 2. 使用 Scan 模型上的缓存漏洞统计(按严重性聚合)
summary['vulnerabilities'] = {
'total': obj.cached_vulns_total or 0,
'critical': obj.cached_vulns_critical or 0,
'high': obj.cached_vulns_high or 0,
'medium': obj.cached_vulns_medium or 0,
'low': obj.cached_vulns_low or 0,
}
return summary
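The per-scan payload is therefore a flat dict of cached counters plus a nested severity breakdown, e.g. (values illustrative):

{
    'subdomains': 120, 'websites': 45, 'endpoints': 3100,
    'ips': 80, 'directories': 15,
    'vulnerabilities': {'total': 12, 'critical': 1, 'high': 3, 'medium': 5, 'low': 3},
}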
class QuickScanSerializer(serializers.Serializer):
"""
快速扫描序列化器
功能:
- 接收目标列表和引擎配置
- 自动创建/获取目标
- 立即发起扫描
"""
# 批量创建的最大数量限制
MAX_BATCH_SIZE = 1000
# 目标列表
targets = serializers.ListField(
child=serializers.DictField(),
help_text='目标列表,每个目标包含 name 字段'
)
# 扫描引擎 ID 列表
engine_ids = serializers.ListField(
child=serializers.IntegerField(),
required=True,
help_text='使用的扫描引擎 ID 列表 (必填)'
)
def validate_targets(self, value):
"""验证目标列表"""
if not value:
raise serializers.ValidationError("目标列表不能为空")
# 检查数量限制,防止服务器过载
if len(value) > self.MAX_BATCH_SIZE:
raise serializers.ValidationError(
f"快速扫描最多支持 {self.MAX_BATCH_SIZE} 个目标,当前提交了 {len(value)}"
)
# 验证每个目标的必填字段
for idx, target in enumerate(value):
if 'name' not in target:
raise serializers.ValidationError(f"{idx + 1} 个目标缺少 name 字段")
if not target['name']:
raise serializers.ValidationError(f"{idx + 1} 个目标的 name 不能为空")
return value
def validate_engine_ids(self, value):
"""验证引擎 ID 列表"""
if not value:
raise serializers.ValidationError("engine_ids 不能为空")
return value
# ==================== Scheduled scan serializers ====================
class ScheduledScanSerializer(serializers.ModelSerializer):
"""Scheduled scan task serializer (list and detail views)"""
# Related fields
organization_id = serializers.IntegerField(source='organization.id', read_only=True, allow_null=True)
organization_name = serializers.CharField(source='organization.name', read_only=True, allow_null=True)
target_id = serializers.IntegerField(source='target.id', read_only=True, allow_null=True)
target_name = serializers.CharField(source='target.name', read_only=True, allow_null=True)
scan_mode = serializers.SerializerMethodField()
class Meta:
model = ScheduledScan
fields = [
'id', 'name',
'engine_ids', 'engine_names',
'organization_id', 'organization_name',
'target_id', 'target_name',
'scan_mode',
'cron_expression',
'is_enabled',
'run_count', 'last_run_time', 'next_run_time',
'created_at', 'updated_at'
]
read_only_fields = [
'id', 'run_count',
'last_run_time', 'next_run_time',
'created_at', 'updated_at'
]
def get_scan_mode(self, obj):
"""Return the scan mode: organization or target"""
return 'organization' if obj.organization_id else 'target'
class CreateScheduledScanSerializer(serializers.Serializer):
"""Serializer for creating a scheduled scan task
Scan mode (choose exactly one):
- organization scan: provide organization_id; all targets under the organization are resolved dynamically at run time
- target scan: provide target_id to scan a single target
"""
name = serializers.CharField(max_length=200, help_text='Task name')
engine_ids = serializers.ListField(
child=serializers.IntegerField(),
help_text='Scan engine ID list'
)
# Organization scan mode
organization_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Organization ID (organization scan mode); all targets under the organization are resolved dynamically at run time'
)
# Target scan mode
target_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Target ID (target scan mode); scans a single target'
)
cron_expression = serializers.CharField(
max_length=100,
default='0 2 * * *',
help_text='Cron expression in "minute hour day month weekday" format'
)
is_enabled = serializers.BooleanField(default=True, help_text='Whether to enable immediately')
def validate_engine_ids(self, value):
"""Validate the engine ID list"""
if not value:
raise serializers.ValidationError("engine_ids cannot be empty")
return value
def validate(self, data):
"""Validate that organization_id and target_id are mutually exclusive"""
organization_id = data.get('organization_id')
target_id = data.get('target_id')
if not organization_id and not target_id:
raise serializers.ValidationError('Either organization_id or target_id must be provided')
if organization_id and target_id:
raise serializers.ValidationError('Only one of organization_id and target_id may be provided')
return data
class UpdateScheduledScanSerializer(serializers.Serializer):
"""Serializer for updating a scheduled scan task"""
name = serializers.CharField(max_length=200, required=False, help_text='Task name')
engine_ids = serializers.ListField(
child=serializers.IntegerField(),
required=False,
help_text='Scan engine ID list'
)
# Organization scan mode
organization_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Organization ID (setting it clears target_id)'
)
# Target scan mode
target_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='Target ID (setting it clears organization_id)'
)
cron_expression = serializers.CharField(max_length=100, required=False, help_text='Cron expression')
is_enabled = serializers.BooleanField(required=False, help_text='Whether enabled')
def validate_engine_ids(self, value):
"""Validate the engine ID list"""
if value is not None and not value:
raise serializers.ValidationError("engine_ids cannot be empty")
return value
class ToggleScheduledScanSerializer(serializers.Serializer):
"""Serializer for toggling a scheduled scan's enabled state"""
is_enabled = serializers.BooleanField(help_text='Whether enabled')

View File

@@ -0,0 +1,40 @@
"""Scan serializers - unified exports"""
from .mixins import ScanConfigValidationMixin
from .scan_serializers import (
ScanSerializer,
ScanHistorySerializer,
QuickScanSerializer,
InitiateScanSerializer,
)
from .scan_log_serializers import ScanLogSerializer
from .scheduled_scan_serializers import (
ScheduledScanSerializer,
CreateScheduledScanSerializer,
UpdateScheduledScanSerializer,
ToggleScheduledScanSerializer,
)
from .subfinder_provider_settings_serializers import SubfinderProviderSettingsSerializer
# Backwards-compatible alias
ProviderSettingsSerializer = SubfinderProviderSettingsSerializer
__all__ = [
# Mixins
'ScanConfigValidationMixin',
# Scan
'ScanSerializer',
'ScanHistorySerializer',
'QuickScanSerializer',
'InitiateScanSerializer',
# ScanLog
'ScanLogSerializer',
# Scheduled Scan
'ScheduledScanSerializer',
'CreateScheduledScanSerializer',
'UpdateScheduledScanSerializer',
'ToggleScheduledScanSerializer',
# Subfinder Provider Settings
'SubfinderProviderSettingsSerializer',
'ProviderSettingsSerializer', # backwards-compatible alias
]

View File

@@ -0,0 +1,57 @@
"""Shared serializer mixins and utilities"""
from rest_framework import serializers
import yaml
class DuplicateKeyLoader(yaml.SafeLoader):
"""Custom YAML loader that detects duplicate keys"""
pass
def _check_duplicate_keys(loader, node, deep=False):
"""Detect duplicate keys in a YAML mapping"""
mapping = {}
for key_node, value_node in node.value:
key = loader.construct_object(key_node, deep=deep)
if key in mapping:
raise yaml.constructor.ConstructorError(
"while constructing a mapping", node.start_mark,
f"Duplicate configuration key '{key}'; later entries would override earlier ones, please remove the duplicate", key_node.start_mark
)
mapping[key] = loader.construct_object(value_node, deep=deep)
return mapping
DuplicateKeyLoader.add_constructor(
yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
_check_duplicate_keys
)
class ScanConfigValidationMixin:
"""Scan configuration validation mixin"""
def validate_configuration(self, value):
"""Validate the YAML configuration format"""
if not value or not value.strip():
raise serializers.ValidationError("configuration cannot be empty")
try:
yaml.load(value, Loader=DuplicateKeyLoader)
except yaml.YAMLError as e:
raise serializers.ValidationError(f"Invalid YAML format: {str(e)}")
return value
def validate_engine_ids(self, value):
"""Validate the engine ID list"""
if not value:
raise serializers.ValidationError("engine_ids cannot be empty; select at least one scan engine")
return value
def validate_engine_names(self, value):
"""Validate the engine name list"""
if not value:
raise serializers.ValidationError("engine_names cannot be empty")
return value
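As a quick illustration of the duplicate-key check above (a hedged sketch; the configuration keys are invented):

import yaml  # DuplicateKeyLoader as defined above

config_text = """
port_scan:
  ports: top-100
port_scan:
  ports: full
"""

try:
    yaml.load(config_text, Loader=DuplicateKeyLoader)
except yaml.constructor.ConstructorError as e:
    print(e)  # reports the duplicate 'port_scan' key and where it occurs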

View File

@@ -0,0 +1,13 @@
"""Scan log serializers"""
from rest_framework import serializers
from ..models import ScanLog
class ScanLogSerializer(serializers.ModelSerializer):
"""Scan log serializer"""
class Meta:
model = ScanLog
fields = ['id', 'level', 'content', 'created_at']

View File

@@ -0,0 +1,112 @@
"""Scan task serializers"""
from rest_framework import serializers
from ..models import Scan
from .mixins import ScanConfigValidationMixin
class ScanSerializer(serializers.ModelSerializer):
"""Scan task serializer"""
target_name = serializers.SerializerMethodField()
class Meta:
model = Scan
fields = [
'id', 'target', 'target_name', 'engine_ids', 'engine_names',
'created_at', 'stopped_at', 'status', 'results_dir',
'container_ids', 'error_message'
]
read_only_fields = [
'id', 'created_at', 'stopped_at', 'results_dir',
'container_ids', 'error_message', 'status'
]
def get_target_name(self, obj):
return obj.target.name if obj.target else None
class ScanHistorySerializer(serializers.ModelSerializer):
"""Scan history list serializer"""
target_name = serializers.CharField(source='target.name', read_only=True)
worker_name = serializers.CharField(source='worker.name', read_only=True, allow_null=True)
summary = serializers.SerializerMethodField()
progress = serializers.IntegerField(read_only=True)
current_stage = serializers.CharField(read_only=True)
stage_progress = serializers.JSONField(read_only=True)
class Meta:
model = Scan
fields = [
'id', 'target', 'target_name', 'engine_ids', 'engine_names',
'worker_name', 'created_at', 'status', 'error_message', 'summary',
'progress', 'current_stage', 'stage_progress', 'yaml_configuration'
]
def get_summary(self, obj):
summary = {
'subdomains': obj.cached_subdomains_count or 0,
'websites': obj.cached_websites_count or 0,
'endpoints': obj.cached_endpoints_count or 0,
'ips': obj.cached_ips_count or 0,
'directories': obj.cached_directories_count or 0,
'screenshots': obj.cached_screenshots_count or 0,
}
summary['vulnerabilities'] = {
'total': obj.cached_vulns_total or 0,
'critical': obj.cached_vulns_critical or 0,
'high': obj.cached_vulns_high or 0,
'medium': obj.cached_vulns_medium or 0,
'low': obj.cached_vulns_low or 0,
}
return summary
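For orientation, the payload shape get_summary produces looks roughly like this (all counts invented for illustration):

example_summary = {
    'subdomains': 120, 'websites': 45, 'endpoints': 300,
    'ips': 32, 'directories': 15, 'screenshots': 40,
    'vulnerabilities': {'total': 7, 'critical': 1,
                        'high': 2, 'medium': 3, 'low': 1},
}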
class QuickScanSerializer(ScanConfigValidationMixin, serializers.Serializer):
"""Quick scan serializer"""
MAX_BATCH_SIZE = 5000
targets = serializers.ListField(
child=serializers.DictField(),
help_text='Target list; each target must contain a name field'
)
configuration = serializers.CharField(required=True, help_text='Scan configuration in YAML format')
engine_ids = serializers.ListField(child=serializers.IntegerField(), required=True)
engine_names = serializers.ListField(child=serializers.CharField(), required=True)
def validate_targets(self, value):
if not value:
raise serializers.ValidationError("The target list cannot be empty")
if len(value) > self.MAX_BATCH_SIZE:
raise serializers.ValidationError(
f"Quick scan supports at most {self.MAX_BATCH_SIZE} targets; {len(value)} were submitted"
)
for idx, target in enumerate(value):
if 'name' not in target:
raise serializers.ValidationError(f"Target {idx + 1} is missing the name field")
if not target['name']:
raise serializers.ValidationError(f"Target {idx + 1} has an empty name")
return value
class InitiateScanSerializer(ScanConfigValidationMixin, serializers.Serializer):
"""Serializer for initiating a scan task"""
configuration = serializers.CharField(required=True, help_text='Scan configuration in YAML format')
engine_ids = serializers.ListField(child=serializers.IntegerField(), required=True)
engine_names = serializers.ListField(child=serializers.CharField(), required=True)
organization_id = serializers.IntegerField(required=False, allow_null=True)
target_id = serializers.IntegerField(required=False, allow_null=True)
def validate(self, data):
organization_id = data.get('organization_id')
target_id = data.get('target_id')
if not organization_id and not target_id:
raise serializers.ValidationError('Either organization_id or target_id must be provided')
if organization_id and target_id:
raise serializers.ValidationError('Only one of organization_id and target_id may be provided')
return data

View File

@@ -0,0 +1,84 @@
"""Scheduled scan serializers"""
from rest_framework import serializers
from ..models import ScheduledScan
from .mixins import ScanConfigValidationMixin
class ScheduledScanSerializer(serializers.ModelSerializer):
"""Scheduled scan task serializer (list and detail views)"""
organization_id = serializers.IntegerField(source='organization.id', read_only=True, allow_null=True)
organization_name = serializers.CharField(source='organization.name', read_only=True, allow_null=True)
target_id = serializers.IntegerField(source='target.id', read_only=True, allow_null=True)
target_name = serializers.CharField(source='target.name', read_only=True, allow_null=True)
scan_mode = serializers.SerializerMethodField()
class Meta:
model = ScheduledScan
fields = [
'id', 'name',
'engine_ids', 'engine_names',
'organization_id', 'organization_name',
'target_id', 'target_name',
'scan_mode',
'cron_expression',
'is_enabled',
'run_count', 'last_run_time', 'next_run_time',
'created_at', 'updated_at'
]
read_only_fields = [
'id', 'run_count',
'last_run_time', 'next_run_time',
'created_at', 'updated_at'
]
def get_scan_mode(self, obj):
return 'organization' if obj.organization_id else 'target'
class CreateScheduledScanSerializer(ScanConfigValidationMixin, serializers.Serializer):
"""Serializer for creating a scheduled scan task"""
name = serializers.CharField(max_length=200, help_text='Task name')
configuration = serializers.CharField(required=True, help_text='Scan configuration in YAML format')
engine_ids = serializers.ListField(child=serializers.IntegerField(), required=True)
engine_names = serializers.ListField(child=serializers.CharField(), required=True)
organization_id = serializers.IntegerField(required=False, allow_null=True)
target_id = serializers.IntegerField(required=False, allow_null=True)
cron_expression = serializers.CharField(max_length=100, default='0 2 * * *')
is_enabled = serializers.BooleanField(default=True)
def validate(self, data):
organization_id = data.get('organization_id')
target_id = data.get('target_id')
if not organization_id and not target_id:
raise serializers.ValidationError('Either organization_id or target_id must be provided')
if organization_id and target_id:
raise serializers.ValidationError('Only one of organization_id and target_id may be provided')
return data
class UpdateScheduledScanSerializer(serializers.Serializer):
"""Serializer for updating a scheduled scan task"""
name = serializers.CharField(max_length=200, required=False)
engine_ids = serializers.ListField(child=serializers.IntegerField(), required=False)
organization_id = serializers.IntegerField(required=False, allow_null=True)
target_id = serializers.IntegerField(required=False, allow_null=True)
cron_expression = serializers.CharField(max_length=100, required=False)
is_enabled = serializers.BooleanField(required=False)
def validate_engine_ids(self, value):
if value is not None and not value:
raise serializers.ValidationError("engine_ids cannot be empty")
return value
class ToggleScheduledScanSerializer(serializers.Serializer):
"""Serializer for toggling a scheduled scan's enabled state"""
is_enabled = serializers.BooleanField(help_text='Whether enabled')

View File

@@ -0,0 +1,55 @@
"""Subfinder provider settings serializers"""
from rest_framework import serializers
class SubfinderProviderSettingsSerializer(serializers.Serializer):
"""Subfinder provider settings serializer
Supported providers:
- fofa: email + api_key (composite)
- censys: api_id + api_secret (composite)
- hunter, shodan, zoomeye, securitytrails, threatbook, quake: api_key (single)
Note: djangorestframework-camel-case converts camelCase <-> snake_case automatically,
so snake_case is used consistently here.
"""
VALID_PROVIDERS = {
'fofa', 'hunter', 'shodan', 'censys',
'zoomeye', 'securitytrails', 'threatbook', 'quake'
}
def to_internal_value(self, data):
"""Validate and convert the input data"""
if not isinstance(data, dict):
raise serializers.ValidationError('Expected a dictionary')
result = {}
for provider, config in data.items():
if provider not in self.VALID_PROVIDERS:
continue
if not isinstance(config, dict):
continue
db_config = {'enabled': bool(config.get('enabled', False))}
if provider == 'fofa':
db_config['email'] = str(config.get('email', ''))
db_config['api_key'] = str(config.get('api_key', ''))
elif provider == 'censys':
db_config['api_id'] = str(config.get('api_id', ''))
db_config['api_secret'] = str(config.get('api_secret', ''))
else:
db_config['api_key'] = str(config.get('api_key', ''))
result[provider] = db_config
return result
def to_representation(self, instance):
"""Return the data in database format (the camel-case middleware converts it automatically)"""
if isinstance(instance, dict):
return instance
return instance.providers if hasattr(instance, 'providers') else {}
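A hedged example of the normalization to_internal_value performs (keys and values invented):

serializer = SubfinderProviderSettingsSerializer()
cleaned = serializer.to_internal_value({
    'fofa': {'enabled': True, 'email': 'user@example.com', 'api_key': 'k1'},
    'shodan': {'enabled': False, 'api_key': 'k2'},
    'not_a_provider': {'enabled': True},  # unknown providers are silently dropped
})
# cleaned == {
#     'fofa': {'enabled': True, 'email': 'user@example.com', 'api_key': 'k1'},
#     'shodan': {'enabled': False, 'api_key': 'k2'},
# }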

View File

@@ -17,8 +17,12 @@ from .scan_state_service import ScanStateService
from .scan_control_service import ScanControlService
from .scan_stats_service import ScanStatsService
from .scheduled_scan_service import ScheduledScanService
from .blacklist_service import BlacklistService
from .target_export_service import TargetExportService
from .target_export_service import (
TargetExportService,
create_export_service,
export_urls_with_fallback,
DataSource,
)
__all__ = [
'ScanService', # main entry point (backwards compatible)
@@ -27,7 +31,9 @@ __all__ = [
'ScanControlService',
'ScanStatsService',
'ScheduledScanService',
'BlacklistService', # blacklist filtering service
'TargetExportService', # target export service
'create_export_service',
'export_urls_with_fallback',
'DataSource',
]

View File

@@ -1,82 +0,0 @@
"""
Blacklist filtering service
Filters sensitive domains (e.g. .gov, .edu, .mil)
The current version uses default rules; loading rules configured from the frontend will be supported later.
"""
from typing import List, Optional
from django.db.models import QuerySet
import re
import logging
logger = logging.getLogger(__name__)
class BlacklistService:
"""
Blacklist filtering service - filters sensitive domains
TODO: support loading blacklist rules from frontend configuration in a later version
- users configure blacklisted URLs, domains, and IPs when starting a scan
- blacklist rules are stored in the database, associated with a Scan or Engine
"""
# Default blacklist regex rules
DEFAULT_PATTERNS = [
r'\.gov$', # ends with .gov
r'\.gov\.[a-z]{2}$', # .gov.cn, .gov.uk, etc.
]
def __init__(self, patterns: Optional[List[str]] = None):
"""
Initialize the blacklist service
Args:
patterns: list of regular expressions; None uses the default rules
"""
self.patterns = patterns or self.DEFAULT_PATTERNS
self._compiled_patterns = [re.compile(p) for p in self.patterns]
def filter_queryset(
self,
queryset: QuerySet,
url_field: str = 'url'
) -> QuerySet:
"""
Filter a queryset at the database level
Uses PostgreSQL regular expressions to exclude blacklisted URLs
Args:
queryset: the original queryset
url_field: URL field name
Returns:
QuerySet: the filtered queryset
"""
for pattern in self.patterns:
queryset = queryset.exclude(**{f'{url_field}__regex': pattern})
return queryset
def filter_url(self, url: str) -> bool:
"""
Check whether a single URL passes the blacklist filter
Args:
url: the URL to check
Returns:
bool: True means the URL passes (not blacklisted); False means it was filtered out
"""
for pattern in self._compiled_patterns:
if pattern.search(url):
return False
return True
# TODO: implement in a later version
# @classmethod
# def from_scan(cls, scan_id: int) -> 'BlacklistService':
# """Load the blacklist rules for a scan's configuration from the database"""
# pass

View File

@@ -282,7 +282,7 @@ class ScanCreationService:
targets: List[Target],
engine_ids: List[int],
engine_names: List[str],
merged_configuration: str,
yaml_configuration: str,
scheduled_scan_name: str | None = None
) -> List[Scan]:
"""
@@ -292,7 +292,7 @@ class ScanCreationService:
targets: target list
engine_ids: engine ID list
engine_names: engine name list
merged_configuration: merged YAML configuration
yaml_configuration: scan configuration in YAML format
scheduled_scan_name: scheduled scan task name (optional, used for notification display)
Returns:
@@ -312,7 +312,7 @@ class ScanCreationService:
target=target,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration,
yaml_configuration=yaml_configuration,
results_dir=scan_workspace_dir,
status=ScanStatus.INITIATED,
container_ids=[],

View File

@@ -117,12 +117,12 @@ class ScanService:
targets: List[Target],
engine_ids: List[int],
engine_names: List[str],
merged_configuration: str,
yaml_configuration: str,
scheduled_scan_name: str | None = None
) -> List[Scan]:
"""Create scan tasks in batch (delegated to ScanCreationService)"""
return self.creation_service.create_scans(
targets, engine_ids, engine_names, merged_configuration, scheduled_scan_name
targets, engine_ids, engine_names, yaml_configuration, scheduled_scan_name
)
# ==================== State management methods (delegated to ScanStateService) ====================

View File

@@ -54,7 +54,7 @@ class ScheduledScanService:
def create(self, dto: ScheduledScanDTO) -> ScheduledScan:
"""
Create a scheduled scan task
Create a scheduled scan task (merging configuration from engine IDs)
Workflow:
1. Validate parameters
@@ -88,7 +88,7 @@ class ScheduledScanService:
# Set the merged configuration and engine names on the DTO
dto.engine_names = engine_names
dto.merged_configuration = merged_configuration
dto.yaml_configuration = merged_configuration
# 3. Create the database record
scheduled_scan = self.repo.create(dto)
@@ -107,12 +107,49 @@ class ScheduledScanService:
return scheduled_scan
def _validate_create_dto(self, dto: ScheduledScanDTO) -> None:
"""Validate the create DTO"""
from apps.targets.repositories import DjangoOrganizationRepository
def create_with_configuration(self, dto: ScheduledScanDTO) -> ScheduledScan:
"""
Create a scheduled scan task (directly uses the configuration passed from the frontend)
if not dto.name:
raise ValidationError('Task name cannot be empty')
Workflow:
1. Validate parameters
2. Use dto.yaml_configuration directly
3. Create the database record
4. Compute and set next_run_time
Args:
dto: scheduled scan DTO; must contain yaml_configuration
Returns:
the created ScheduledScan object
Raises:
ValidationError: parameter validation failed
"""
# 1. Validate parameters
self._validate_create_dto_with_configuration(dto)
# 2. Create the database record (directly using the configuration on the DTO)
scheduled_scan = self.repo.create(dto)
# 3. If a cron expression is set and the task is enabled, compute the next run time
if scheduled_scan.cron_expression and scheduled_scan.is_enabled:
next_run_time = self._calculate_next_run_time(scheduled_scan)
if next_run_time:
self.repo.update_next_run_time(scheduled_scan.id, next_run_time)
scheduled_scan.next_run_time = next_run_time
logger.info(
"Created scheduled scan task - ID: %s, name: %s, next run: %s",
scheduled_scan.id, scheduled_scan.name, scheduled_scan.next_run_time
)
return scheduled_scan
def _validate_create_dto(self, dto: ScheduledScanDTO) -> None:
"""Validate the create DTO (using engine IDs)"""
# Base validation
self._validate_base_dto(dto)
if not dto.engine_ids:
raise ValidationError('A scan engine must be selected')
@@ -121,6 +158,21 @@ class ScheduledScanService:
for engine_id in dto.engine_ids:
if not self.engine_repo.get_by_id(engine_id):
raise ValidationError(f'Scan engine ID {engine_id} does not exist')
def _validate_create_dto_with_configuration(self, dto: ScheduledScanDTO) -> None:
"""Validate the create DTO (using the configuration passed from the frontend)"""
# Base validation
self._validate_base_dto(dto)
if not dto.yaml_configuration:
raise ValidationError('Configuration cannot be empty')
def _validate_base_dto(self, dto: ScheduledScanDTO) -> None:
"""Validate the DTO's base fields (shared logic)"""
from apps.targets.repositories import DjangoOrganizationRepository
if not dto.name:
raise ValidationError('Task name cannot be empty')
# Validate the scan mode (organization_id and target_id are mutually exclusive)
if not dto.organization_id and not dto.target_id:
@@ -178,7 +230,7 @@ class ScheduledScanService:
merged_configuration = merge_engine_configs(engines)
dto.engine_names = engine_names
dto.merged_configuration = merged_configuration
dto.yaml_configuration = merged_configuration
# Update the database record
scheduled_scan = self.repo.update(scheduled_scan_id, dto)
@@ -329,7 +381,7 @@ class ScheduledScanService:
Trigger a scan immediately (supports both organization and target scan modes)
Reuses ScanService logic to stay consistent with the API call path.
Uses the stored merged_configuration instead of re-merging.
Uses the stored yaml_configuration instead of re-merging.
"""
from apps.scan.services.scan_service import ScanService
@@ -347,7 +399,7 @@ class ScheduledScanService:
targets=targets,
engine_ids=scheduled_scan.engine_ids,
engine_names=scheduled_scan.engine_names,
merged_configuration=scheduled_scan.merged_configuration,
yaml_configuration=scheduled_scan.yaml_configuration,
scheduled_scan_name=scheduled_scan.name
)
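The diff does not show _calculate_next_run_time itself; a minimal sketch of the usual approach, assuming the croniter package (an assumption, not confirmed by this diff):

from datetime import datetime
from croniter import croniter

def calculate_next_run_time(cron_expression: str, base: datetime | None = None) -> datetime:
    # e.g. calculate_next_run_time('0 2 * * *') -> the next occurrence of 02:00
    return croniter(cron_expression, base or datetime.now()).get_next(datetime)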

View File

@@ -0,0 +1,138 @@
"""Subfinder provider configuration file generation service
Generates the provider-config.yaml configuration file for subfinder
"""
import logging
import os
from pathlib import Path
from typing import Optional
import yaml
from ..models import SubfinderProviderSettings
logger = logging.getLogger(__name__)
class SubfinderProviderConfigService:
"""Subfinder provider configuration file generation service"""
# Provider format definitions
PROVIDER_FORMATS = {
'fofa': {'type': 'composite', 'format': '{email}:{api_key}'},
'censys': {'type': 'composite', 'format': '{api_id}:{api_secret}'},
'hunter': {'type': 'single', 'field': 'api_key'},
'shodan': {'type': 'single', 'field': 'api_key'},
'zoomeye': {'type': 'single', 'field': 'api_key'},
'securitytrails': {'type': 'single', 'field': 'api_key'},
'threatbook': {'type': 'single', 'field': 'api_key'},
'quake': {'type': 'single', 'field': 'api_key'},
}
def generate(self, output_dir: str) -> Optional[str]:
"""
Generate the provider-config.yaml file
Args:
output_dir: output directory path
Returns:
the path of the generated configuration file, or None if no provider is enabled
"""
settings = SubfinderProviderSettings.get_instance()
config = {}
has_enabled = False
for provider, format_info in self.PROVIDER_FORMATS.items():
provider_config = settings.providers.get(provider, {})
if not provider_config.get('enabled'):
config[provider] = []
continue
value = self._build_provider_value(provider, provider_config)
if value:
config[provider] = [value] # a single key wrapped in a list
has_enabled = True
logger.debug(f"Provider {provider} enabled")
else:
config[provider] = []
# Check whether any provider is enabled
if not has_enabled:
logger.info("No provider enabled; skipping configuration file generation")
return None
# Ensure the output directory exists
output_path = Path(output_dir) / 'provider-config.yaml'
output_path.parent.mkdir(parents=True, exist_ok=True)
# Write the YAML file (default list style, matching subfinder)
with open(output_path, 'w', encoding='utf-8') as f:
yaml.dump(config, f, default_flow_style=False, allow_unicode=True)
# Set file permissions to 600 (owner read/write only)
os.chmod(output_path, 0o600)
logger.info(f"Provider configuration file generated: {output_path}")
return str(output_path)
def _build_provider_value(self, provider: str, config: dict) -> Optional[str]:
"""Build a configuration value according to the provider's format rules
Args:
provider: provider name
config: provider configuration dict
Returns:
the built configuration value string, or None if the configuration is incomplete
"""
format_info = self.PROVIDER_FORMATS.get(provider)
if not format_info:
return None
if format_info['type'] == 'composite':
# Composite format: requires multiple fields
format_str = format_info['format']
try:
# Extract the field names from the format string,
# e.g. '{email}:{api_key}' -> ['email', 'api_key']
import re
fields = re.findall(r'\{(\w+)\}', format_str)
# Check that every field has a value
values = {}
for field in fields:
value = config.get(field, '').strip()
if not value:
logger.debug(f"Provider {provider} is missing field {field}")
return None
values[field] = value
return format_str.format(**values)
except (KeyError, ValueError) as e:
logger.warning(f"Failed to build the configuration value for {provider}: {e}")
return None
else:
# Single-field format
field = format_info['field']
value = config.get(field, '').strip()
if not value:
logger.debug(f"Provider {provider} is missing field {field}")
return None
return value
def cleanup(self, config_path: str) -> None:
"""Clean up the configuration file
Args:
config_path: configuration file path
"""
try:
if config_path and Path(config_path).exists():
Path(config_path).unlink()
logger.debug(f"Configuration file cleaned up: {config_path}")
except Exception as e:
logger.warning(f"Failed to clean up the configuration file: {config_path} - {e}")
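A hedged usage sketch of the service (the output directory and keys are invented); disabled providers serialize as empty lists, enabled ones as single-element lists:

service = SubfinderProviderConfigService()
config_path = service.generate('/tmp/scan-123')  # hypothetical output dir
# With fofa (email=user@example.com, api_key=k1) and shodan (api_key=k2) enabled,
# provider-config.yaml would contain, e.g.:
#   censys: []
#   fofa:
#   - user@example.com:k1
#   shodan:
#   - k2
try:
    pass  # run subfinder with the generated provider config here
finally:
    if config_path:
        service.cleanup(config_path)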

View File

@@ -2,7 +2,9 @@
Target export service
Provides unified target extraction and file export, supporting:
- URL export (streaming writes + default-value fallback)
- URL export (pure export, no implicit fallback)
- default URL generation (standalone method)
- URL export with a fallback chain (orchestrated at the use-case layer)
- domain/IP export (for port scanning)
- blacklist filtering integration
"""
@@ -10,37 +12,326 @@
import ipaddress
import logging
from pathlib import Path
from typing import Dict, Any, Optional, Iterator
from typing import Dict, Any, Optional, List, Iterator, Tuple, Callable
from django.db.models import QuerySet
from .blacklist_service import BlacklistService
from apps.common.utils import BlacklistFilter
logger = logging.getLogger(__name__)
class DataSource:
"""Data source type constants"""
ENDPOINT = "endpoint"
WEBSITE = "website"
HOST_PORT = "host_port"
DEFAULT = "default"
def create_export_service(target_id: int) -> 'TargetExportService':
"""
Factory function: create an export service with blacklist filtering
Args:
target_id: target ID, used to load the blacklist rules
Returns:
TargetExportService: an export service instance configured with a blacklist filter
"""
from apps.common.services import BlacklistService
rules = BlacklistService().get_rules(target_id)
blacklist_filter = BlacklistFilter(rules)
return TargetExportService(blacklist_filter=blacklist_filter)
def _iter_default_urls_from_target(
target_id: int,
blacklist_filter: Optional[BlacklistFilter] = None
) -> Iterator[str]:
"""
Internal generator: produce default URLs from the Target itself
URLs are generated according to the Target type:
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: expanded to http(s)://ip for every address
- URL: the target URL is used directly
Args:
target_id: target ID
blacklist_filter: blacklist filter
Yields:
str: URL
"""
from apps.targets.services import TargetService
from apps.targets.models import Target
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d does not exist; cannot generate default URLs", target_id)
return
target_name = target.name
target_type = target.type
# Generate URLs according to the Target type
if target_type == Target.TargetType.DOMAIN:
urls = [f"http://{target_name}", f"https://{target_name}"]
elif target_type == Target.TargetType.IP:
urls = [f"http://{target_name}", f"https://{target_name}"]
elif target_type == Target.TargetType.CIDR:
try:
network = ipaddress.ip_network(target_name, strict=False)
urls = []
for ip in network.hosts():
urls.extend([f"http://{ip}", f"https://{ip}"])
# Special-case /32 or /128 networks
if not urls:
ip = str(network.network_address)
urls = [f"http://{ip}", f"https://{ip}"]
except ValueError as e:
logger.error("Failed to parse CIDR: %s - %s", target_name, e)
return
elif target_type == Target.TargetType.URL:
urls = [target_name]
else:
logger.warning("Unsupported Target type: %s", target_type)
return
# Filter and yield
for url in urls:
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
yield url
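To make the CIDR branch concrete, a /30 network expands to its two host addresses, each with both schemes (a worked example of the logic above):

import ipaddress

network = ipaddress.ip_network('192.168.0.0/30', strict=False)
urls = []
for ip in network.hosts():  # 192.168.0.1 and 192.168.0.2
    urls.extend([f"http://{ip}", f"https://{ip}"])
# urls == ['http://192.168.0.1', 'https://192.168.0.1',
#          'http://192.168.0.2', 'https://192.168.0.2']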
def _iter_urls_with_fallback(
target_id: int,
sources: List[str],
blacklist_filter: Optional[BlacklistFilter] = None,
batch_size: int = 1000,
tried_sources: Optional[List[str]] = None
) -> Iterator[Tuple[str, str]]:
"""
Internal generator: stream URLs with a fallback chain
Tries each data source in `sources` order until one yields data.
Fallback logic:
- a source has data and it passes filtering → yield URLs, stop falling back
- a source has data but all of it is filtered → do not fall back; stop (avoids accidental exposure)
- a source is empty → try the next source
Args:
target_id: target ID
sources: data source priority list
blacklist_filter: blacklist filter
batch_size: batch size
tried_sources: optional list recording attempted sources (passed in externally; will be mutated)
Yields:
Tuple[str, str]: (url, source) - the URL and its source tag
"""
from apps.asset.models import Endpoint, WebSite
for source in sources:
if tried_sources is not None:
tried_sources.append(source)
has_output = False # whether anything was yielded (passed filtering)
has_raw_data = False # whether the source had raw data (before filtering)
if source == DataSource.DEFAULT:
# Default URL generation (built from the Target itself, reusing the shared generator)
for url in _iter_default_urls_from_target(target_id, blacklist_filter):
has_raw_data = True
has_output = True
yield url, source
# Check for raw data separately, since the generator may be empty after filtering
if not has_raw_data:
# Re-check whether the Target exists
from apps.targets.services import TargetService
target = TargetService().get_target(target_id)
has_raw_data = target is not None
if has_raw_data:
if not has_output:
logger.info("%s has data but all of it was blacklist-filtered; not falling back", source)
return
continue
# Build the queryset for the corresponding data source
if source == DataSource.ENDPOINT:
queryset = Endpoint.objects.filter(target_id=target_id).values_list('url', flat=True)
elif source == DataSource.WEBSITE:
queryset = WebSite.objects.filter(target_id=target_id).values_list('url', flat=True)
else:
logger.warning("Unknown data source type: %s; skipping", source)
continue
for url in queryset.iterator(chunk_size=batch_size):
if url:
has_raw_data = True
if blacklist_filter and not blacklist_filter.is_allowed(url):
continue
has_output = True
yield url, source
# Stop once a source had raw data (whether or not it was filtered)
if has_raw_data:
if not has_output:
logger.info("%s has data but all of it was blacklist-filtered; not falling back", source)
return
logger.info("%s is empty; trying the next data source", source)
def get_urls_with_fallback(
target_id: int,
sources: List[str],
batch_size: int = 1000
) -> Dict[str, Any]:
"""
Use-case function: fetch URLs with a fallback chain (returns a list)
Tries each data source in `sources` order until one yields data.
Args:
target_id: target ID
sources: data source priority list, e.g. ["website", "endpoint", "default"]
batch_size: batch size
Returns:
dict: {
'success': bool,
'urls': List[str],
'total_count': int,
'source': str, # the data source actually used
'tried_sources': List[str], # the data sources attempted
}
"""
from apps.common.services import BlacklistService
rules = BlacklistService().get_rules(target_id)
blacklist_filter = BlacklistFilter(rules)
urls = []
actual_source = 'none'
tried_sources = []
for url, source in _iter_urls_with_fallback(target_id, sources, blacklist_filter, batch_size, tried_sources):
urls.append(url)
actual_source = source
if urls:
logger.info("%s yielded %d URLs", actual_source, len(urls))
else:
logger.warning("All data sources are empty; no URLs could be fetched")
return {
'success': True,
'urls': urls,
'total_count': len(urls),
'source': actual_source,
'tried_sources': tried_sources,
}
def export_urls_with_fallback(
target_id: int,
output_file: str,
sources: List[str],
batch_size: int = 1000
) -> Dict[str, Any]:
"""
Use-case function: export URLs with a fallback chain (writes to a file)
Tries each data source in `sources` order until one yields data.
Streaming writes; O(1) memory usage.
Args:
target_id: target ID
output_file: output file path
sources: data source priority list, e.g. ["endpoint", "website", "default"]
batch_size: batch size
Returns:
dict: {
'success': bool,
'output_file': str,
'total_count': int,
'source': str, # the data source actually used
'tried_sources': List[str], # the data sources attempted
}
"""
from apps.common.services import BlacklistService
rules = BlacklistService().get_rules(target_id)
blacklist_filter = BlacklistFilter(rules)
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
total_count = 0
actual_source = 'none'
tried_sources = []
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
for url, source in _iter_urls_with_fallback(target_id, sources, blacklist_filter, batch_size, tried_sources):
f.write(f"{url}\n")
total_count += 1
actual_source = source
if total_count % 10000 == 0:
logger.info("Exported %d URLs so far...", total_count)
if total_count > 0:
logger.info("%s exported %d URLs to %s", actual_source, total_count, output_file)
else:
logger.warning("All data sources are empty; no URLs could be exported")
return {
'success': True,
'output_file': str(output_path),
'total_count': total_count,
'source': actual_source,
'tried_sources': tried_sources,
}
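A hedged end-to-end example of the fallback chain (target ID, path, and counts invented):

result = export_urls_with_fallback(
    target_id=42,
    output_file='/tmp/target-42/urls.txt',
    sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT],
)
# If the Endpoint table is empty but WebSite has rows, the result might be:
# {'success': True, 'output_file': '/tmp/target-42/urls.txt',
#  'total_count': 37, 'source': 'website',
#  'tried_sources': ['endpoint', 'website']}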
class TargetExportService:
"""
Target export service - unified target extraction and file export
Usage:
# the Task layer decides the data source
queryset = WebSite.objects.filter(target_id=target_id).values_list('url', flat=True)
# Option 1: use the use-case function (recommended)
from apps.scan.services.target_export_service import export_urls_with_fallback, DataSource
# use the export service
blacklist_service = BlacklistService()
export_service = TargetExportService(blacklist_service=blacklist_service)
result = export_urls_with_fallback(
target_id=1,
output_file='/path/to/output.txt',
sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT]
)
# Option 2: use the Service directly (pure export, no fallback)
export_service = create_export_service(target_id)
result = export_service.export_urls(target_id, output_path, queryset)
"""
def __init__(self, blacklist_service: Optional[BlacklistService] = None):
def __init__(self, blacklist_filter: Optional[BlacklistFilter] = None):
"""
Initialize the export service
Args:
blacklist_service: blacklist filtering service; None disables filtering
blacklist_filter: blacklist filter; None disables filtering
"""
self.blacklist_service = blacklist_service
self.blacklist_filter = blacklist_filter
def export_urls(
self,
@@ -51,16 +342,14 @@ class TargetExportService:
batch_size: int = 1000
) -> Dict[str, Any]:
"""
Unified URL export function
URL export function - only writes queryset data to a file
Automatically checks whether the database has data:
- has data: streams database rows to the file
- no data: calls the default-value generator to produce URLs
Performs no implicit fallback or default URL generation.
Args:
target_id: target ID
output_path: output file path
queryset: data source queryset (built by the Task layer; should be values_list(flat=True))
queryset: data source queryset (built by the caller; should be values_list(flat=True))
url_field: URL field name (used for blacklist filtering)
batch_size: batch size
@@ -68,7 +357,9 @@ class TargetExportService:
dict: {
'success': bool,
'output_file': str,
'total_count': int
'total_count': int, # number actually written
'queryset_count': int, # raw row count (iteration count)
'filtered_count': int, # number filtered by the blacklist
}
Raises:
@@ -79,19 +370,18 @@ class TargetExportService:
logger.info("Starting URL export - target_id=%s, output=%s", target_id, output_path)
# Apply blacklist filtering (database level)
if self.blacklist_service:
# Note: queryset should be the raw queryset here, not a values_list
# We assume the Task layer passes a values_list, so filtering is handled at the Task layer
pass
total_count = 0
filtered_count = 0
queryset_count = 0
try:
with open(output_file, 'w', encoding='utf-8', buffering=8192) as f:
for url in queryset.iterator(chunk_size=batch_size):
queryset_count += 1
if url:
# Blacklist filtering at the Python level
if self.blacklist_service and not self.blacklist_service.filter_url(url):
# Blacklist filtering
if self.blacklist_filter and not self.blacklist_filter.is_allowed(url):
filtered_count += 1
continue
f.write(f"{url}\n")
total_count += 1
@@ -102,25 +392,29 @@ class TargetExportService:
logger.error("File write failed: %s - %s", output_path, e)
raise
# Default-value fallback mode
if total_count == 0:
total_count = self._generate_default_urls(target_id, output_file)
if filtered_count > 0:
logger.info("Blacklist filtering: %d URLs filtered", filtered_count)
logger.info("✓ URL export complete - count: %d, file: %s", total_count, output_path)
logger.info(
"✓ URL export complete - written: %d, raw: %d, filtered: %d, file: %s",
total_count, queryset_count, filtered_count, output_path
)
return {
'success': True,
'output_file': str(output_file),
'total_count': total_count
'total_count': total_count,
'queryset_count': queryset_count,
'filtered_count': filtered_count,
}
def _generate_default_urls(
def generate_default_urls(
self,
target_id: int,
output_path: Path
) -> int:
output_path: str
) -> Dict[str, Any]:
"""
Default-value generator (internal function)
Default URL generator
Generates default URLs according to the Target type:
- DOMAIN: http(s)://domain
@@ -133,91 +427,43 @@ class TargetExportService:
output_path: output file path
Returns:
int: total number of URLs written
dict: {
'success': bool,
'output_file': str,
'total_count': int,
}
"""
from apps.targets.services import TargetService
from apps.targets.models import Target
output_file = Path(output_path)
output_file.parent.mkdir(parents=True, exist_ok=True)
target_service = TargetService()
target = target_service.get_target(target_id)
if not target:
logger.warning("Target ID %d does not exist; cannot generate default URLs", target_id)
return 0
target_name = target.name
target_type = target.type
logger.info("Lazy-load mode: Target type=%s, name=%s", target_type, target_name)
logger.info("Generating default URLs - target_id=%d", target_id)
total_urls = 0
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
if target_type == Target.TargetType.DOMAIN:
urls = [f"http://{target_name}", f"https://{target_name}"]
for url in urls:
if self._should_write_url(url):
f.write(f"{url}\n")
total_urls += 1
elif target_type == Target.TargetType.IP:
urls = [f"http://{target_name}", f"https://{target_name}"]
for url in urls:
if self._should_write_url(url):
f.write(f"{url}\n")
total_urls += 1
elif target_type == Target.TargetType.CIDR:
try:
network = ipaddress.ip_network(target_name, strict=False)
for ip in network.hosts():
urls = [f"http://{ip}", f"https://{ip}"]
for url in urls:
if self._should_write_url(url):
f.write(f"{url}\n")
total_urls += 1
if total_urls % 10000 == 0:
logger.info("Generated %d URLs so far...", total_urls)
# Special-case /32 or /128 networks
if total_urls == 0:
ip = str(network.network_address)
urls = [f"http://{ip}", f"https://{ip}"]
for url in urls:
if self._should_write_url(url):
f.write(f"{url}\n")
total_urls += 1
except ValueError as e:
logger.error("Failed to parse CIDR: %s - %s", target_name, e)
raise ValueError(f"Invalid CIDR: {target_name}") from e
elif target_type == Target.TargetType.URL:
if self._should_write_url(target_name):
f.write(f"{target_name}\n")
total_urls = 1
else:
logger.warning("Unsupported Target type: %s", target_type)
with open(output_file, 'w', encoding='utf-8', buffering=8192) as f:
for url in _iter_default_urls_from_target(target_id, self.blacklist_filter):
f.write(f"{url}\n")
total_urls += 1
if total_urls % 10000 == 0:
logger.info("Generated %d URLs so far...", total_urls)
logger.info("Lazily generated default URLs - count: %d", total_urls)
return total_urls
def _should_write_url(self, url: str) -> bool:
"""Check whether a URL should be written (passes the blacklist filter)"""
if self.blacklist_service:
return self.blacklist_service.filter_url(url)
return True
logger.info("✓ Default URL generation complete - count: %d", total_urls)
return {
'success': True,
'output_file': str(output_file),
'total_count': total_urls,
}
def export_targets(
def export_hosts(
self,
target_id: int,
output_path: str,
batch_size: int = 1000
) -> Dict[str, Any]:
"""
Domain/IP export function (for port scanning)
Host list export function (for port scanning)
The export logic is chosen according to the Target type:
- DOMAIN: stream subdomains from the Subdomain table
@@ -255,7 +501,7 @@ class TargetExportService:
target_name = target.name
logger.info(
"Starting scan target export - Target ID: %d, Name: %s, Type: %s, output file: %s",
"Starting host list export - Target ID: %d, Name: %s, Type: %s, output file: %s",
target_id, target_name, target_type, output_path
)
@@ -277,7 +523,7 @@ class TargetExportService:
raise ValueError(f"Unsupported target type: {target_type}")
logger.info(
"Scan target export complete - type: %s, total: %d, file: %s",
"Host list export complete - type: %s, total: %d, file: %s",
type_desc, total_count, output_path
)
@@ -295,7 +541,7 @@ class TargetExportService:
output_path: Path,
batch_size: int
) -> int:
"""Export subdomains for a DOMAIN-type target"""
"""Export the root domain + subdomains for a DOMAIN-type target"""
from apps.asset.services.asset.subdomain_service import SubdomainService
subdomain_service = SubdomainService()
@@ -305,23 +551,27 @@
)
total_count = 0
written_domains = set() # dedupe (the subdomain table may already contain the root domain)
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
# 1. Write the root domain first
if self._should_write_target(target_name):
f.write(f"{target_name}\n")
written_domains.add(target_name)
total_count += 1
# 2. Then write the subdomains (skipping the already-written root domain)
for domain_name in domain_iterator:
if domain_name in written_domains:
continue
if self._should_write_target(domain_name):
f.write(f"{domain_name}\n")
written_domains.add(domain_name)
total_count += 1
if total_count % 10000 == 0:
logger.info("Exported %d domains so far...", total_count)
# Default-value mode: if there are no subdomains, use the root domain
if total_count == 0:
logger.info("Using default domain: %s (target_id=%d)", target_name, target_id)
if self._should_write_target(target_name):
with open(output_path, 'w', encoding='utf-8') as f:
f.write(f"{target_name}\n")
total_count = 1
return total_count
def _export_ip(self, target_name: str, output_path: Path) -> int:
@@ -359,6 +609,6 @@ class TargetExportService:
def _should_write_target(self, target: str) -> bool:
"""Check whether a target should be written (passes the blacklist filter)"""
if self.blacklist_service:
return self.blacklist_service.filter_url(target)
if self.blacklist_filter:
return self.blacklist_filter.is_allowed(target)
return True

View File

@@ -1,14 +1,16 @@
"""
Task for exporting site URLs to a TXT file
Uses TargetExportService to handle the export logic and default-value fallback
Data source: WebSite.url
Uses the export_urls_with_fallback use-case function to handle the fallback chain
Data source: WebSite.url → Default
"""
import logging
from prefect import task
from apps.asset.models import WebSite
from apps.scan.services import TargetExportService, BlacklistService
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
logger = logging.getLogger(__name__)
@@ -22,13 +24,9 @@ def export_sites_task(
"""
Export all site URLs under a target to a TXT file
Data source: WebSite.url
Lazy-load mode:
- if the database is empty, generate default URLs according to the Target type
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: expanded to URLs for every IP
Data source priority (fallback chain):
1. WebSite table - site-level URLs
2. default generation - http(s)://target_name derived from the Target type
Args:
target_id: target ID
@@ -46,26 +44,21 @@ def export_sites_task(
ValueError: invalid arguments
IOError: file write failed
"""
# Build the data source queryset (the Task layer decides the data source)
queryset = WebSite.objects.filter(target_id=target_id).values_list('url', flat=True)
# Use TargetExportService to handle the export
blacklist_service = BlacklistService()
export_service = TargetExportService(blacklist_service=blacklist_service)
result = export_service.export_urls(
result = export_urls_with_fallback(
target_id=target_id,
output_path=output_file,
queryset=queryset,
batch_size=batch_size
output_file=output_file,
sources=[DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"Site URL export complete - source=%s, count=%d",
result['source'], result['total_count']
)
# Keep the return format unchanged (backwards compatible)
return {
'success': result['success'],
'output_file': result['output_file'],
'total_count': result['total_count']
'total_count': result['total_count'],
}

View File

@@ -2,15 +2,17 @@
Export URLs task
Exports the URLs under a target to a file before fingerprinting
Uses TargetExportService to handle the export logic and default-value fallback
Uses the export_urls_with_fallback use-case function to handle the fallback chain
"""
import logging
from prefect import task
from apps.asset.models import WebSite
from apps.scan.services import TargetExportService, BlacklistService
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
logger = logging.getLogger(__name__)
@@ -19,47 +21,40 @@ logger = logging.getLogger(__name__)
def export_urls_for_fingerprint_task(
target_id: int,
output_file: str,
source: str = 'website',
source: str = 'website', # parameter kept for backwards compatibility (the actual value is decided by the fallback chain)
batch_size: int = 1000
) -> dict:
"""
Export URLs under a target to a file (for fingerprinting)
Data source: WebSite.url
Lazy-load mode:
- if the database is empty, generate default URLs according to the Target type
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: expanded to URLs for every IP
- URL: the target URL is used directly
Data source priority (fallback chain):
1. WebSite table - site-level URLs
2. default generation - http(s)://target_name derived from the Target type
Args:
target_id: target ID
output_file: output file path
source: data source type (parameter kept for backwards compatibility)
source: data source type (parameter kept for backwards compatibility; the actual value is decided by the fallback chain)
batch_size: batch read size
Returns:
dict: {'output_file': str, 'total_count': int, 'source': str}
"""
# Build the data source queryset (the Task layer decides the data source)
queryset = WebSite.objects.filter(target_id=target_id).values_list('url', flat=True)
# Use TargetExportService to handle the export
blacklist_service = BlacklistService()
export_service = TargetExportService(blacklist_service=blacklist_service)
result = export_service.export_urls(
result = export_urls_with_fallback(
target_id=target_id,
output_path=output_file,
queryset=queryset,
batch_size=batch_size
output_file=output_file,
sources=[DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
# Keep the return format unchanged (backwards compatible)
logger.info(
"Fingerprinting URL export complete - source=%s, count=%d",
result['source'], result['total_count']
)
# Return the data source actually used (no longer fixed to "website")
return {
'output_file': result['output_file'],
'total_count': result['total_count'],
'source': source
'source': result['source'],
}

View File

@@ -4,12 +4,12 @@
Provides the atomic tasks required by the port scan flow
"""
from .export_scan_targets_task import export_scan_targets_task
from .export_hosts_task import export_hosts_task
from .run_and_stream_save_ports_task import run_and_stream_save_ports_task
from .types import PortScanRecord
__all__ = [
'export_scan_targets_task',
'export_hosts_task',
'run_and_stream_save_ports_task',
'PortScanRecord',
]

View File

@@ -1,7 +1,7 @@
"""
Task for exporting scan targets to a TXT file
Task for exporting the host list to a TXT file
Uses TargetExportService.export_targets() to handle the export logic
Uses TargetExportService.export_hosts() to handle the export logic
The export content is decided by the Target type:
- DOMAIN: export subdomains from the Subdomain table
@@ -11,19 +11,19 @@
import logging
from prefect import task
from apps.scan.services import TargetExportService, BlacklistService
from apps.scan.services.target_export_service import create_export_service
logger = logging.getLogger(__name__)
@task(name="export_scan_targets")
def export_scan_targets_task(
@task(name="export_hosts")
def export_hosts_task(
target_id: int,
output_file: str,
batch_size: int = 1000
) -> dict:
"""
Export scan targets to a TXT file
Export the host list to a TXT file
The export content is decided automatically by the Target type:
- DOMAIN: export subdomains from the Subdomain table (streamed; supports 10+ domains)
@@ -47,11 +47,10 @@ def export_scan_targets_task(
ValueError: the Target does not exist
IOError: file write failed
"""
# Use TargetExportService to handle the export
blacklist_service = BlacklistService()
export_service = TargetExportService(blacklist_service=blacklist_service)
# Create the export service via the factory function
export_service = create_export_service(target_id)
result = export_service.export_targets(
result = export_service.export_hosts(
target_id=target_id,
output_path=output_file,
batch_size=batch_size

View File

@@ -0,0 +1,12 @@
"""
Screenshot task module
Contains all screenshot-related tasks:
- capture_screenshots_task: batch screenshot task
"""
from .capture_screenshots_task import capture_screenshots_task
__all__ = [
'capture_screenshots_task',
]

View File

@@ -0,0 +1,194 @@
"""
Batch screenshot task
Captures website screenshots in batches with Playwright, compresses them, and saves them to the database
"""
import asyncio
import logging
import time
from prefect import task
logger = logging.getLogger(__name__)
def _run_async(coro):
"""
Run an async coroutine from a synchronous context
Args:
coro: the async coroutine
Returns:
the coroutine's result
"""
try:
loop = asyncio.get_event_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
return loop.run_until_complete(coro)
def _save_screenshot_with_retry(
screenshot_service,
scan_id: int,
url: str,
webp_data: bytes,
status_code: int | None = None,
max_retries: int = 3
) -> bool:
"""
保存截图到数据库(带重试机制)
Args:
screenshot_service: ScreenshotService 实例
scan_id: 扫描 ID
url: URL
webp_data: WebP 图片数据
status_code: HTTP 响应状态码
max_retries: 最大重试次数
Returns:
是否保存成功
"""
for attempt in range(max_retries):
try:
if screenshot_service.save_screenshot_snapshot(scan_id, url, webp_data, status_code):
return True
# save returned False; wait and retry
if attempt < max_retries - 1:
wait_time = 2 ** attempt # exponential backoff: 1s, 2s, 4s
logger.warning(
"Failed to save screenshot (attempt %d); retrying in %d s: %s",
attempt + 1, wait_time, url
)
time.sleep(wait_time)
except Exception as e:
if attempt < max_retries - 1:
wait_time = 2 ** attempt
logger.warning(
"Exception while saving screenshot (attempt %d); retrying in %d s: %s, error: %s",
attempt + 1, wait_time, url, str(e)[:100]
)
time.sleep(wait_time)
else:
logger.error("Failed to save screenshot (after %d attempts): %s", max_retries, url)
return False
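The retry cadence implied by wait_time = 2 ** attempt, as a worked example (max_retries=3):

# attempt 0 fails -> sleep 2**0 = 1 s, then retry
# attempt 1 fails -> sleep 2**1 = 2 s, then retry
# attempt 2 fails -> no further sleep; log the failure and return False
waits = [2 ** attempt for attempt in range(2)]  # [1, 2]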
async def _capture_and_save_screenshots(
urls: list[str],
scan_id: int,
concurrency: int
) -> dict:
"""
Asynchronously capture and save screenshots in batch
Args:
urls: URL list
scan_id: scan ID
concurrency: concurrency level
Returns:
a statistics dict
"""
from asgiref.sync import sync_to_async
from apps.asset.services.playwright_screenshot_service import PlaywrightScreenshotService
from apps.asset.services.screenshot_service import ScreenshotService
# Initialize the services
playwright_service = PlaywrightScreenshotService(concurrency=concurrency)
screenshot_service = ScreenshotService()
# Wrap the synchronous save function as async
async_save_with_retry = sync_to_async(_save_screenshot_with_retry, thread_sensitive=True)
# Statistics
total = len(urls)
successful = 0
failed = 0
logger.info("Starting batch screenshots - URLs: %d, concurrency: %d", total, concurrency)
# Capture in batches
async for url, screenshot_bytes, status_code in playwright_service.capture_batch(urls):
if screenshot_bytes is None:
failed += 1
continue
# Compress to WebP
webp_data = screenshot_service.compress_from_bytes(screenshot_bytes)
if webp_data is None:
logger.warning("Failed to compress screenshot: %s", url)
failed += 1
continue
# Save to the database (with retry, via sync_to_async)
if await async_save_with_retry(screenshot_service, scan_id, url, webp_data, status_code):
successful += 1
if successful % 10 == 0:
logger.info("Screenshot progress: %d/%d succeeded", successful, total)
else:
failed += 1
return {
'total': total,
'successful': successful,
'failed': failed
}
@task(name='capture_screenshots', retries=0)
def capture_screenshots_task(
urls: list[str],
scan_id: int,
target_id: int,
config: dict
) -> dict:
"""
Batch screenshot task
Args:
urls: URL list
scan_id: scan ID
target_id: target ID (used for logging)
config: screenshot configuration
- concurrency: concurrency level (default 5)
Returns:
dict: {
'total': int, # total URL count
'successful': int, # successful screenshots
'failed': int # failures
}
"""
if not urls:
logger.info("URL list is empty; skipping screenshot task")
return {'total': 0, 'successful': 0, 'failed': 0}
concurrency = config.get('concurrency', 5)
logger.info(
"Starting screenshot task - scan_id=%d, target_id=%d, URLs=%d, concurrency=%d",
scan_id, target_id, len(urls), concurrency
)
try:
result = _run_async(_capture_and_save_screenshots(
urls=urls,
scan_id=scan_id,
concurrency=concurrency
))
logger.info(
"✓ Screenshot task complete - total: %d, successful: %d, failed: %d",
result['total'], result['successful'], result['failed']
)
return result
except Exception as e:
logger.error("Screenshot task failed: %s", e, exc_info=True)
raise RuntimeError(f"Screenshot task failed: {e}") from e

View File

@@ -2,7 +2,7 @@
Task for exporting site URLs to a file
Queries host+port combinations directly from the HostPortMapping table, builds URL strings, and writes them to a file
Uses TargetExportService to handle the default-value fallback logic
Uses TargetExportService.generate_default_urls() to handle the default-value fallback logic
Special-case logic:
- port 80: generate the HTTP URL only (port omitted)
@@ -14,7 +14,9 @@ from pathlib import Path
from prefect import task
from apps.asset.services import HostPortMappingService
from apps.scan.services import TargetExportService, BlacklistService
from apps.scan.services.target_export_service import create_export_service
from apps.common.services import BlacklistService
from apps.common.utils import BlacklistFilter
logger = logging.getLogger(__name__)
@@ -44,18 +46,15 @@ def export_site_urls_task(
"""
Export all site URLs under a target to a file (based on the HostPortMapping table)
Data source: HostPortMapping (host + port)
Data source: HostPortMapping (host + port) → Default
Special-case logic:
- port 80: generate the HTTP URL only (port omitted)
- port 443: generate the HTTPS URL only (port omitted)
- other ports: generate both HTTP and HTTPS URLs (with the port number)
Lazy-load mode:
- if the database is empty, generate default URLs according to the Target type
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: expanded to URLs for every IP
Fallback logic:
- if HostPortMapping is empty, use generate_default_urls() to generate default URLs
Args:
target_id: target ID
@@ -67,7 +66,8 @@ def export_site_urls_task(
'success': bool,
'output_file': str,
'total_urls': int,
'association_count': int # host-port association count
'association_count': int, # host-port association count
'source': str, # data source: "host_port" | "default"
}
Raises:
@@ -80,8 +80,8 @@ def export_site_urls_task(
output_path = Path(output_file)
output_path.parent.mkdir(parents=True, exist_ok=True)
# Initialize the blacklist service
blacklist_service = BlacklistService()
# Fetch the rules and create the filter
blacklist_filter = BlacklistFilter(BlacklistService().get_rules(target_id))
# Query the HostPortMapping table directly, ordered by host
service = HostPortMappingService()
@@ -92,6 +92,7 @@ def export_site_urls_task(
total_urls = 0
association_count = 0
filtered_count = 0
# Stream writes to the file (special port logic)
with open(output_path, 'w', encoding='utf-8', buffering=8192) as f:
@@ -100,28 +101,53 @@ def export_site_urls_task(
host = assoc['host']
port = assoc['port']
# Validate the host first; generate URLs only if it passes
if not blacklist_filter.is_allowed(host):
filtered_count += 1
continue
# Generate URLs according to the port number
for url in _generate_urls_from_port(host, port):
if blacklist_service.filter_url(url):
f.write(f"{url}\n")
total_urls += 1
f.write(f"{url}\n")
total_urls += 1
if association_count % 1000 == 0:
logger.info("Processed %d associations, generated %d URLs...", association_count, total_urls)
if filtered_count > 0:
logger.info("Blacklist filtering: %d associations filtered", filtered_count)
logger.info(
"✓ Site URL export complete - associations: %d, total URLs: %d, file: %s",
association_count, total_urls, str(output_path)
)
# Default-value fallback mode: use TargetExportService
# Determine the data source
source = "host_port"
# Data exists but was entirely filtered; do not fall back
if association_count > 0 and total_urls == 0:
logger.info("HostPortMapping has %d rows, but all were blacklist-filtered; not falling back", association_count)
return {
'success': True,
'output_file': str(output_path),
'total_urls': 0,
'association_count': association_count,
'source': source,
}
# The data source is empty; fall back to default URL generation
if total_urls == 0:
export_service = TargetExportService(blacklist_service=blacklist_service)
total_urls = export_service._generate_default_urls(target_id, output_path)
logger.info("HostPortMapping is empty; using default URL generation")
export_service = create_export_service(target_id)
result = export_service.generate_default_urls(target_id, str(output_path))
total_urls = result['total_count']
source = "default"
return {
'success': True,
'output_file': str(output_path),
'total_urls': total_urls,
'association_count': association_count
'association_count': association_count,
'source': source,
}
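The real _generate_urls_from_port is not shown in this diff; a minimal sketch consistent with the port rules described above:

def generate_urls_from_port(host: str, port: int) -> list[str]:
    if port == 80:
        return [f"http://{host}"]   # HTTP only, port omitted
    if port == 443:
        return [f"https://{host}"]  # HTTPS only, port omitted
    return [f"http://{host}:{port}", f"https://{host}:{port}"]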

View File

@@ -341,11 +341,12 @@ def _save_batch(
)
snapshot_items.append(snapshot_dto)
except Exception as e:
logger.error("Failed to process record: %s, error: %s", record.url, e)
continue
# ========== Step 3: Save snapshots and sync to the asset tables (via the snapshot Service) ==========
# ========== Step 2: Save snapshots and sync to the asset tables (via the snapshot Service) ==========
if snapshot_items:
services.snapshot.save_and_sync(snapshot_items)

View File

@@ -111,6 +111,7 @@ def save_domains_task(
continue
# Only domains that pass validation are added to the batch and counted
# Note: no blacklist filtering here, to maximize asset discovery
batch.append(domain)
total_domains += 1

View File

@@ -1,16 +1,17 @@
"""
Task for exporting the site URL list
Uses TargetExportService to handle the export logic and default-value fallback
Data source: WebSite.url (for crawlers such as katana)
Uses the export_urls_with_fallback use-case function to handle the fallback chain
Data source: WebSite.url → Default (for crawlers such as katana)
"""
import logging
from prefect import task
from typing import Optional
from apps.asset.models import WebSite
from apps.scan.services import TargetExportService, BlacklistService
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
logger = logging.getLogger(__name__)
@@ -29,13 +30,9 @@ def export_sites_task(
"""
Export the site URL list to a file (for crawlers such as katana)
Data source: WebSite.url
Lazy-load mode:
- if the database is empty, generate default URLs according to the Target type
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: expanded to URLs for every IP
Data source priority (fallback chain):
1. WebSite table - site-level URLs
2. default generation - http(s)://target_name derived from the Target type
Args:
output_file: 输出文件路径
@@ -53,18 +50,16 @@ def export_sites_task(
ValueError: invalid arguments
RuntimeError: execution failed
"""
# Build the data source queryset (the Task layer decides the data source)
queryset = WebSite.objects.filter(target_id=target_id).values_list('url', flat=True)
# Use TargetExportService to handle the export
blacklist_service = BlacklistService()
export_service = TargetExportService(blacklist_service=blacklist_service)
result = export_service.export_urls(
result = export_urls_with_fallback(
target_id=target_id,
output_path=output_file,
queryset=queryset,
batch_size=batch_size
output_file=output_file,
sources=[DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"Site URL export complete - source=%s, count=%d",
result['source'], result['total_count']
)
# Keep the return format unchanged (backwards compatible)

View File

@@ -1,16 +1,22 @@
"""Task for exporting Endpoint URLs to a file
Uses TargetExportService to handle the export logic and default-value fallback
Data source: Endpoint.url
Uses the export_urls_with_fallback use-case function to handle the fallback chain
Data source priority (fallback chain):
1. Endpoint.url - the most fine-grained URLs (with paths, parameters, etc.)
2. WebSite.url - site-level URLs
3. default generation - http(s)://target_name derived from the Target type
"""
import logging
from typing import Dict, Optional
from typing import Dict
from prefect import task
from apps.asset.models import Endpoint
from apps.scan.services import TargetExportService, BlacklistService
from apps.scan.services.target_export_service import (
export_urls_with_fallback,
DataSource,
)
logger = logging.getLogger(__name__)
@@ -23,13 +29,10 @@ def export_endpoints_task(
) -> Dict[str, object]:
"""Export all Endpoint URLs under a target to a text file.
Data source: Endpoint.url
Lazy-load mode:
- if the database is empty, generate default URLs according to the Target type
- DOMAIN: http(s)://domain
- IP: http(s)://ip
- CIDR: expanded to URLs for every IP
Data source priority (fallback chain):
1. Endpoint table - the most fine-grained URLs (with paths, parameters, etc.)
2. WebSite table - site-level URLs
3. default generation - http(s)://target_name derived from the Target type
Args:
target_id: target ID
@@ -41,25 +44,24 @@
"success": bool,
"output_file": str,
"total_count": int,
"source": str, # data source: "endpoint" | "website" | "default" | "none"
}
"""
# Build the data source queryset (the Task layer decides the data source)
queryset = Endpoint.objects.filter(target_id=target_id).values_list('url', flat=True)
# Use TargetExportService to handle the export
blacklist_service = BlacklistService()
export_service = TargetExportService(blacklist_service=blacklist_service)
result = export_service.export_urls(
result = export_urls_with_fallback(
target_id=target_id,
output_path=output_file,
queryset=queryset,
batch_size=batch_size
output_file=output_file,
sources=[DataSource.ENDPOINT, DataSource.WEBSITE, DataSource.DEFAULT],
batch_size=batch_size,
)
logger.info(
"URL export complete - source=%s, count=%d, tried=%s",
result['source'], result['total_count'], result['tried_sources']
)
# Keep the return format unchanged (backwards compatible)
return {
"success": result['success'],
"output_file": result['output_file'],
"total_count": result['total_count'],
"source": result['source'],
}

View File

@@ -1,10 +1,11 @@
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import ScanViewSet, ScheduledScanViewSet
from .views import ScanViewSet, ScheduledScanViewSet, ScanLogListView, SubfinderProviderSettingsView
from .notifications.views import notification_callback
from apps.asset.views import (
SubdomainSnapshotViewSet, WebsiteSnapshotViewSet, DirectorySnapshotViewSet,
EndpointSnapshotViewSet, HostPortMappingSnapshotViewSet, VulnerabilitySnapshotViewSet
EndpointSnapshotViewSet, HostPortMappingSnapshotViewSet, VulnerabilitySnapshotViewSet,
ScreenshotSnapshotViewSet
)
# Create the router
@@ -26,11 +27,17 @@ scan_endpoints_export = EndpointSnapshotViewSet.as_view({'get': 'export'})
scan_ip_addresses_list = HostPortMappingSnapshotViewSet.as_view({'get': 'list'})
scan_ip_addresses_export = HostPortMappingSnapshotViewSet.as_view({'get': 'export'})
scan_vulnerabilities_list = VulnerabilitySnapshotViewSet.as_view({'get': 'list'})
scan_screenshots_list = ScreenshotSnapshotViewSet.as_view({'get': 'list'})
scan_screenshots_image = ScreenshotSnapshotViewSet.as_view({'get': 'image'})
urlpatterns = [
path('', include(router.urls)),
# Worker callback API
path('callbacks/notification/', notification_callback, name='notification-callback'),
# API key settings
path('settings/api-keys/', SubfinderProviderSettingsView.as_view(), name='subfinder-provider-settings'),
# Scan log API
path('scans/<int:scan_id>/logs/', ScanLogListView.as_view(), name='scan-logs-list'),
# Nested routes: /api/scans/{scan_pk}/xxx/
path('scans/<int:scan_pk>/subdomains/', scan_subdomains_list, name='scan-subdomains-list'),
path('scans/<int:scan_pk>/subdomains/export/', scan_subdomains_export, name='scan-subdomains-export'),
@@ -43,5 +50,7 @@ urlpatterns = [
path('scans/<int:scan_pk>/ip-addresses/', scan_ip_addresses_list, name='scan-ip-addresses-list'),
path('scans/<int:scan_pk>/ip-addresses/export/', scan_ip_addresses_export, name='scan-ip-addresses-export'),
path('scans/<int:scan_pk>/vulnerabilities/', scan_vulnerabilities_list, name='scan-vulnerabilities-list'),
path('scans/<int:scan_pk>/screenshots/', scan_screenshots_list, name='scan-screenshots-list'),
path('scans/<int:scan_pk>/screenshots/<int:pk>/image/', scan_screenshots_image, name='scan-screenshots-image'),
]
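
As a quick illustration of the two new screenshot routes, a client could list a scan's screenshots and then download each image. The host, API prefix, and paginated response shape are assumptions here; only the URL paths come from the urlconf above:

import requests

BASE = "http://localhost:8000/api"  # assumed host and prefix
scan_id = 7  # hypothetical scan

resp = requests.get(f"{BASE}/scans/{scan_id}/screenshots/")
for item in resp.json().get("results", []):  # paginated shape assumed
    image = requests.get(f"{BASE}/scans/{scan_id}/screenshots/{item['id']}/image/")
    with open(f"screenshot_{item['id']}.webp", "wb") as fh:  # WebP format assumed
        fh.write(image.content)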

View File

@@ -11,6 +11,7 @@ from .wordlist_helpers import ensure_wordlist_local
from .nuclei_helpers import ensure_nuclei_templates_local
from .performance import FlowPerformanceTracker, CommandPerformanceTracker
from .workspace_utils import setup_scan_workspace, setup_scan_directory
from .user_logger import user_log
from . import config_parser
__all__ = [
@@ -31,6 +32,8 @@ __all__ = [
# Performance monitoring
'FlowPerformanceTracker', # Flow performance tracker (with system resource sampling)
'CommandPerformanceTracker', # Command performance tracker
# Scan logs
'user_log', # user-visible scan log recording
# Config parsing
'config_parser',
]

View File

@@ -48,7 +48,7 @@ ENABLE_COMMAND_LOGGING = getattr(settings, 'ENABLE_COMMAND_LOGGING', True)
# Dynamic concurrency-control thresholds (can be overridden in Django settings)
SCAN_CPU_HIGH = getattr(settings, 'SCAN_CPU_HIGH', 90.0) # CPU high-water mark (percent)
SCAN_MEM_HIGH = getattr(settings, 'SCAN_MEM_HIGH', 80.0) # memory high-water mark (percent)
SCAN_LOAD_CHECK_INTERVAL = getattr(settings, 'SCAN_LOAD_CHECK_INTERVAL', 30) # load-check interval (seconds)
SCAN_LOAD_CHECK_INTERVAL = getattr(settings, 'SCAN_LOAD_CHECK_INTERVAL', 180) # load-check interval (seconds)
SCAN_COMMAND_STARTUP_DELAY = getattr(settings, 'SCAN_COMMAND_STARTUP_DELAY', 5) # delay before command startup (seconds)
_ACTIVE_COMMANDS = 0
@@ -74,7 +74,7 @@ def _wait_for_system_load() -> None:
return
logger.info(
"系统负载较高,暂缓启动: cpu=%.1f%% (阈值 %.1f%%), mem=%.1f%% (阈值 %.1f%%)",
"系统负载较高,任务将排队执行防止oom: cpu=%.1f%% (阈值 %.1f%%), mem=%.1f%% (阈值 %.1f%%)",
cpu,
SCAN_CPU_HIGH,
mem,
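
The gating behavior tuned here (check interval raised from 30s to 180s, log message reworded) roughly amounts to the loop below. This is a minimal sketch assuming psutil-style sampling, not the module's actual implementation:

import time
import psutil  # assumed; the real module may sample load differently

def wait_for_system_load(cpu_high=90.0, mem_high=80.0, interval=180):
    """Block until CPU and memory fall below the high-water marks."""
    while True:
        cpu = psutil.cpu_percent(interval=1)
        mem = psutil.virtual_memory().percent
        if cpu < cpu_high and mem < mem_high:
            return
        # High load: keep the command queued instead of starting it, to avoid OOM
        time.sleep(interval)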

View File

@@ -0,0 +1,56 @@
"""
扫描日志记录器
提供统一的日志记录接口,用于在 Flow 中记录用户可见的扫描进度日志。
特性:
- 简单的函数式 API
- 只写入数据库ScanLog 表),不写 Python logging
- 错误容忍(数据库失败不影响扫描执行)
职责分离:
- user_log: 用户可见日志(写数据库,前端展示)
- logger: 开发者日志(写日志文件/控制台,调试用)
使用示例:
from apps.scan.utils import user_log
# 用户日志(写数据库)
user_log(scan_id, "port_scan", "Starting port scan")
user_log(scan_id, "port_scan", "naabu completed: found 120 ports")
# 开发者日志(写日志文件)
logger.info("✓ 工具 %s 执行完成 - 记录数: %d", tool_name, count)
"""
import logging
from django.db import DatabaseError
logger = logging.getLogger(__name__)
def user_log(scan_id: int, stage: str, message: str, level: str = "info"):
"""
记录用户可见的扫描日志(只写数据库)
Args:
scan_id: 扫描任务 ID
stage: 阶段名称,如 "port_scan", "site_scan"
message: 日志消息
level: 日志级别,默认 "info",可选 "warning", "error"
数据库 content 格式: "[{stage}] {message}"
"""
formatted = f"[{stage}] {message}"
try:
from apps.scan.models import ScanLog
ScanLog.objects.create(
scan_id=scan_id,
level=level,
content=formatted
)
except DatabaseError as e:
logger.error("ScanLog write failed - scan_id=%s, error=%s", scan_id, e)
except Exception as e:
logger.error("ScanLog write unexpected error - scan_id=%s, error=%s", scan_id, e)

View File

@@ -2,8 +2,12 @@
from .scan_views import ScanViewSet
from .scheduled_scan_views import ScheduledScanViewSet
from .scan_log_views import ScanLogListView
from .subfinder_provider_settings_views import SubfinderProviderSettingsView
__all__ = [
'ScanViewSet',
'ScheduledScanViewSet',
'ScanLogListView',
'SubfinderProviderSettingsView',
]

View File

@@ -0,0 +1,56 @@
"""
扫描日志 API
提供扫描日志查询接口,支持游标分页用于增量轮询。
"""
from rest_framework.views import APIView
from rest_framework.response import Response
from apps.scan.models import ScanLog
from apps.scan.serializers import ScanLogSerializer
class ScanLogListView(APIView):
"""
GET /scans/{scan_id}/logs/
Cursor-paginated API for incremental log queries
Query params:
- afterId: only return logs with IDs greater than this value (for incremental polling; avoids duplicates caused by repeated timestamps)
- limit: maximum number of results (default 200, max 1000)
Returns:
- results: list of log entries
- hasMore: whether more logs remain
"""
def get(self, request, scan_id: int):
# Parse query parameters
after_id = request.query_params.get('afterId')
try:
limit = min(int(request.query_params.get('limit', 200)), 1000)
except (ValueError, TypeError):
limit = 200
# Query logs ordered by ID (IDs are auto-incrementing, so the order is stable)
queryset = ScanLog.objects.filter(scan_id=scan_id).order_by('id')
# Cursor filter (use the ID rather than the timestamp, so multiple logs sharing one timestamp cannot be returned twice)
if after_id:
try:
queryset = queryset.filter(id__gt=int(after_id))
except (ValueError, TypeError):
pass
# Limit the result count (fetch one extra row to decide hasMore)
logs = list(queryset[:limit + 1])
has_more = len(logs) > limit
if has_more:
logs = logs[:limit]
return Response({
'results': ScanLogSerializer(logs, many=True).data,
'hasMore': has_more,
})
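
A client can tail these logs by carrying the afterId cursor forward between polls. A sketch assuming the serializer exposes id and content fields (the content format "[{stage}] {message}" is documented in user_logger; host and prefix are assumptions):

import time
import requests

BASE = "http://localhost:8000/api"  # assumed host and prefix

def tail_scan_logs(scan_id: int, poll_seconds: float = 3.0) -> None:
    """Print new log entries as they arrive, without re-fetching old ones."""
    after_id = None
    while True:
        params = {"limit": 200}
        if after_id is not None:
            params["afterId"] = after_id
        data = requests.get(f"{BASE}/scans/{scan_id}/logs/", params=params).json()
        for entry in data["results"]:
            print(entry["content"])  # e.g. "[port_scan] Starting port scan"
            after_id = entry["id"]   # advance the cursor past what we have seen
        if not data["hasMore"]:
            time.sleep(poll_seconds)  # caught up; wait before polling again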

View File

@@ -3,6 +3,7 @@ from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.exceptions import NotFound, APIException
from rest_framework.filters import SearchFilter
from django_filters.rest_framework import DjangoFilterBackend
from django.core.exceptions import ObjectDoesNotExist, ValidationError
from django.db.utils import DatabaseError, IntegrityError, OperationalError
import logging
@@ -16,7 +17,7 @@ logger = logging.getLogger(__name__)
from ..models import Scan, ScheduledScan
from ..serializers import (
ScanSerializer, ScanHistorySerializer, QuickScanSerializer,
ScheduledScanSerializer, CreateScheduledScanSerializer,
InitiateScanSerializer, ScheduledScanSerializer, CreateScheduledScanSerializer,
UpdateScheduledScanSerializer, ToggleScheduledScanSerializer
)
from ..services.scan_service import ScanService
@@ -33,7 +34,8 @@ class ScanViewSet(viewsets.ModelViewSet):
"""扫描任务视图集"""
serializer_class = ScanSerializer
pagination_class = BasePagination
filter_backends = [SearchFilter]
filter_backends = [DjangoFilterBackend, SearchFilter]
filterset_fields = ['target'] # supports ?target=123 filtering
search_fields = ['target__name'] # search by target name
def get_queryset(self):
@@ -111,7 +113,7 @@ class ScanViewSet(viewsets.ModelViewSet):
Quick scan endpoint
Features:
1. Accepts a target list and engine configuration
1. Accepts a target list and a YAML configuration
2. Automatically parses input (supports URL, domain, IP, CIDR)
3. Batch-creates Target, Website, and Endpoint assets
4. Immediately launches batch scans
@@ -119,7 +121,9 @@ class ScanViewSet(viewsets.ModelViewSet):
Request parameters:
{
"targets": [{"name": "example.com"}, {"name": "https://example.com/api"}],
"engine_ids": [1, 2]
"configuration": "subdomain_discovery:\n enabled: true\n ...",
"engine_ids": [1, 2], // 可选,用于记录
"engine_names": ["引擎A", "引擎B"] // 可选,用于记录
}
Supported input formats:
@@ -134,7 +138,9 @@ class ScanViewSet(viewsets.ModelViewSet):
serializer.is_valid(raise_exception=True)
targets_data = serializer.validated_data['targets']
engine_ids = serializer.validated_data.get('engine_ids')
configuration = serializer.validated_data['configuration']
engine_ids = serializer.validated_data.get('engine_ids', [])
engine_names = serializer.validated_data.get('engine_names', [])
try:
# Extract the list of input strings
@@ -154,19 +160,13 @@ class ScanViewSet(viewsets.ModelViewSet):
status_code=status.HTTP_400_BAD_REQUEST
)
# 2. Prepare the multi-engine scan
# 2. Create scans directly from the configuration passed by the frontend
scan_service = ScanService()
_, merged_configuration, engine_names, engine_ids = scan_service.prepare_initiate_scan_multi_engine(
target_id=targets[0].id, # use the first target to validate engines
engine_ids=engine_ids
)
# 3. Launch scans in batch
created_scans = scan_service.create_scans(
targets=targets,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration
yaml_configuration=configuration
)
# Check whether scan tasks were created successfully
@@ -195,17 +195,6 @@ class ScanViewSet(viewsets.ModelViewSet):
},
status_code=status.HTTP_201_CREATED
)
except ConfigConflictError as e:
return error_response(
code='CONFIG_CONFLICT',
message=str(e),
details=[
{'key': k, 'engines': [e1, e2]}
for k, e1, e2 in e.conflicts
],
status_code=status.HTTP_400_BAD_REQUEST
)
except ValidationError as e:
return error_response(
@@ -228,48 +217,53 @@ class ScanViewSet(viewsets.ModelViewSet):
Request parameters:
- organization_id: organization ID (int, optional)
- target_id: target ID (int, optional)
- configuration: YAML configuration string (str, required)
- engine_ids: list of scan engine IDs (list[int], required)
- engine_names: list of engine names (list[str], required)
Note: provide exactly one of organization_id and target_id
Returns:
- scan task details (one or more)
"""
# Read the request data
organization_id = request.data.get('organization_id')
target_id = request.data.get('target_id')
engine_ids = request.data.get('engine_ids')
# Validate the request data with the serializer
serializer = InitiateScanSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
# Validate engine_ids
if not engine_ids:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='Missing required parameter: engine_ids',
status_code=status.HTTP_400_BAD_REQUEST
)
if not isinstance(engine_ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='engine_ids must be an array',
status_code=status.HTTP_400_BAD_REQUEST
)
# Read the validated data
organization_id = serializer.validated_data.get('organization_id')
target_id = serializer.validated_data.get('target_id')
configuration = serializer.validated_data['configuration']
engine_ids = serializer.validated_data['engine_ids']
engine_names = serializer.validated_data['engine_names']
try:
# Step 1: prepare the data needed for the multi-engine scan
# Fetch the target list
scan_service = ScanService()
targets, merged_configuration, engine_names, engine_ids = scan_service.prepare_initiate_scan_multi_engine(
organization_id=organization_id,
target_id=target_id,
engine_ids=engine_ids
)
# Step 2: batch-create scan records and dispatch scan tasks
if organization_id:
from apps.targets.repositories import DjangoOrganizationRepository
org_repo = DjangoOrganizationRepository()
organization = org_repo.get_by_id(organization_id)
if not organization:
raise ObjectDoesNotExist(f'Organization ID {organization_id} does not exist')
targets = org_repo.get_targets(organization_id)
if not targets:
raise ValidationError(f'Organization ID {organization_id} has no targets')
else:
from apps.targets.repositories import DjangoTargetRepository
target_repo = DjangoTargetRepository()
target = target_repo.get_by_id(target_id)
if not target:
raise ObjectDoesNotExist(f'Target ID {target_id} does not exist')
targets = [target]
# Create scans directly from the configuration passed by the frontend
created_scans = scan_service.create_scans(
targets=targets,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration
yaml_configuration=configuration
)
# Check whether scan tasks were created successfully
@@ -290,17 +284,6 @@ class ScanViewSet(viewsets.ModelViewSet):
},
status_code=status.HTTP_201_CREATED
)
except ConfigConflictError as e:
return error_response(
code='CONFIG_CONFLICT',
message=str(e),
details=[
{'key': k, 'engines': [e1, e2]}
for k, e1, e2 in e.conflicts
],
status_code=status.HTTP_400_BAD_REQUEST
)
except ObjectDoesNotExist as e:
# Resource-not-found error (raised by the service layer)
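
Put together, the reworked endpoint now expects the full YAML configuration from the client instead of merging engine configs server-side. A hedged request example (the URL path and port are assumptions; the field names come from the docstring above):

import requests

payload = {
    "target_id": 12,  # or "organization_id": 3 -- provide exactly one of the two
    "configuration": "subdomain_discovery:\n  enabled: true\n",
    "engine_ids": [1],            # kept for record-keeping
    "engine_names": ["Engine A"],
}
resp = requests.post("http://localhost:8000/api/scans/initiate/", json=payload)  # path assumed
print(resp.status_code, resp.json())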

View File

@@ -37,6 +37,11 @@ class ScheduledScanViewSet(viewsets.ModelViewSet):
- PUT /scheduled-scans/{id}/ update a scheduled scan
- DELETE /scheduled-scans/{id}/ delete a scheduled scan
- POST /scheduled-scans/{id}/toggle/ toggle the enabled state
Query parameters:
- target_id: filter by target ID
- organization_id: filter by organization ID
- search: search by name
"""
queryset = ScheduledScan.objects.all().order_by('-created_at')
@@ -49,6 +54,19 @@ class ScheduledScanViewSet(viewsets.ModelViewSet):
super().__init__(*args, **kwargs)
self.service = ScheduledScanService()
def get_queryset(self):
"""支持按 target_id 和 organization_id 过滤"""
queryset = super().get_queryset()
target_id = self.request.query_params.get('target_id')
organization_id = self.request.query_params.get('organization_id')
if target_id:
queryset = queryset.filter(target_id=target_id)
if organization_id:
queryset = queryset.filter(organization_id=organization_id)
return queryset
def get_serializer_class(self):
"""根据 action 返回不同的序列化器"""
if self.action == 'create':
@@ -68,30 +86,22 @@ class ScheduledScanViewSet(viewsets.ModelViewSet):
data = serializer.validated_data
dto = ScheduledScanDTO(
name=data['name'],
engine_ids=data['engine_ids'],
engine_ids=data.get('engine_ids', []),
engine_names=data.get('engine_names', []),
yaml_configuration=data['configuration'],
organization_id=data.get('organization_id'),
target_id=data.get('target_id'),
cron_expression=data.get('cron_expression', '0 2 * * *'),
is_enabled=data.get('is_enabled', True),
)
scheduled_scan = self.service.create(dto)
scheduled_scan = self.service.create_with_configuration(dto)
response_serializer = ScheduledScanSerializer(scheduled_scan)
return success_response(
data=response_serializer.data,
status_code=status.HTTP_201_CREATED
)
except ConfigConflictError as e:
return error_response(
code='CONFIG_CONFLICT',
message=str(e),
details=[
{'key': k, 'engines': [e1, e2]}
for k, e1, e2 in e.conflicts
],
status_code=status.HTTP_400_BAD_REQUEST
)
except ValidationError as e:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,

View File

@@ -0,0 +1,38 @@
"""Subfinder Provider 配置视图"""
import logging
from rest_framework import status
from rest_framework.views import APIView
from rest_framework.response import Response
from ..models import SubfinderProviderSettings
from ..serializers import SubfinderProviderSettingsSerializer
logger = logging.getLogger(__name__)
class SubfinderProviderSettingsView(APIView):
"""Subfinder Provider 配置视图
GET /api/settings/api-keys/ - 获取配置
PUT /api/settings/api-keys/ - 更新配置
"""
def get(self, request):
"""获取 Subfinder Provider 配置"""
settings = SubfinderProviderSettings.get_instance()
serializer = SubfinderProviderSettingsSerializer(settings.providers)
return Response(serializer.data)
def put(self, request):
"""更新 Subfinder Provider 配置"""
serializer = SubfinderProviderSettingsSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
settings = SubfinderProviderSettings.get_instance()
settings.providers.update(serializer.validated_data)
settings.save()
logger.info("Subfinder Provider 配置已更新")
return Response(SubfinderProviderSettingsSerializer(settings.providers).data)
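
Usage against this view might look as follows. The provider key name is a hypothetical example, and the flat key-value shape is an assumption based on the serializer wrapping settings.providers:

import requests

BASE = "http://localhost:8000/api"  # assumed host and prefix

# Read the current provider configuration
current = requests.get(f"{BASE}/settings/api-keys/").json()

# Update a single provider key ("shodan" is a hypothetical provider name)
current["shodan"] = "YOUR_SHODAN_API_KEY"
resp = requests.put(f"{BASE}/settings/api-keys/", json=current)
print(resp.json())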

View File

@@ -1,4 +1,4 @@
# Generated by Django 5.2.7 on 2026-01-02 04:45
# Generated by Django 5.2.7 on 2026-01-06 00:55
from django.db import migrations, models

Some files were not shown because too many files have changed in this diff.