Compare commits

..

20 Commits

Author SHA1 Message Date
yyhuni
7cd4354d8f feat(scan,asset): add scan logging system and improve search view architecture
- Add user_logger utility for structured scan operation logging
- Create scan log views and API endpoints for retrieving scan execution logs
- Add scan-log-list component and use-scan-logs hook for frontend log display
- Refactor asset search views to remove ArrayField support from pg_ivm IMMV
- Update search_service.py to JOIN original tables for array field retrieval
- Add system architecture requirements (AMD64/ARM64) to README
- Update scan flow handlers to integrate logging system
- Enhance scan progress dialog with log viewer integration
- Add ANSI log viewer component for formatted log display
- Update scan service API to support log retrieval endpoints
- Migrate database schema to support new logging infrastructure
- Add internationalization strings for scan logs (en/zh)
This change improves observability of scan operations and resolves pg_ivm limitations with ArrayField types by fetching array data from original tables via JOIN operations.
2026-01-04 18:19:45 +08:00
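The user_logger utility introduced here is not itself part of the diffs below — only its call sites in the scan flows are visible (e.g. `user_log(scan_id, "port_scan", "Running naabu", "error")`). A minimal, hypothetical sketch of such a helper, with the storage model assumed:

```python
# Hypothetical sketch of the user_log helper this commit introduces; only its
# call sites appear in the diffs below, so the storage backend is an assumption.
import logging

logger = logging.getLogger(__name__)


def user_log(scan_id: int, stage: str, message: str, level: str = "info") -> None:
    """Record a user-facing log entry for a scan without breaking the scan on failure."""
    try:
        from apps.scan.models import ScanLog  # assumed model backing the scan log API
        ScanLog.objects.create(scan_id=scan_id, stage=stage, level=level, message=message)
    except Exception as exc:
        # Observability must never abort the scan flow itself
        logger.warning("user_log failed for scan %s: %s", scan_id, exc)
```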
yyhuni
6bf35a760f chore(docker): configure Prefect home directory in worker image
- Add PREFECT_HOME environment variable pointing to /app/.prefect
- Create Prefect configuration directory to prevent home directory warnings
- Update step numbering in Dockerfile comments for clarity
- Ensure Prefect can properly initialize its configuration without relying on the user home directory
2026-01-04 10:39:11 +08:00
github-actions[bot]
be9ecadffb chore: bump version to v1.3.12-dev 2026-01-04 01:05:00 +00:00
yyhuni
adb53c9f85 feat(asset,scan): add configurable statement timeout and improve CSV export
- Add statement_timeout_ms parameter to search_service count() and stream_search() methods for long-running exports
- Replace server-side cursors with OFFSET/LIMIT batching for better Django compatibility
- Introduce create_csv_export_response() utility function to standardize CSV export handling
- Add engine-preset-selector and scan-config-editor components for enhanced scan configuration UI
- Update YAML editor component with improved styling and functionality
- Add i18n translations for new scan configuration features in English and Chinese
- Refactor CSV export endpoints to use new utility function instead of manual StreamingHttpResponse
- Remove unused uuid import from search_service.py
- Update nginx configuration for improved performance
- Enhance search service with configurable timeout support for large dataset exports
2026-01-04 08:58:31 +08:00
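The OFFSET/LIMIT batching that replaces the server-side cursors is shown in full in the search_service.py diff further down; distilled to its core, the pattern looks roughly like this sketch (names are placeholders, not the project's API):

```python
# Illustrative sketch of OFFSET/LIMIT batching with a per-statement timeout;
# the real implementation is AssetSearchService.search_iter (see the
# search_service.py diff below). SQL and names here are placeholders.
from typing import Any, Dict, Iterator, List
from django.db import connection, transaction

def iter_in_batches(base_sql: str, params: List[Any], batch_size: int = 1000,
                    statement_timeout_ms: int = 300_000) -> Iterator[Dict[str, Any]]:
    offset = 0
    while True:
        sql = f"{base_sql} LIMIT {batch_size} OFFSET {offset}"
        with transaction.atomic(), connection.cursor() as cursor:
            # SET LOCAL scopes the longer timeout to this transaction only
            cursor.execute(f"SET LOCAL statement_timeout = {statement_timeout_ms}")
            cursor.execute(sql, params)
            columns = [col[0] for col in cursor.description]
            rows = cursor.fetchall()
        for row in rows:
            yield dict(zip(columns, row))
        if len(rows) < batch_size:  # last (possibly empty) batch
            break
        offset += batch_size
```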
github-actions[bot]
8dd3f0536e chore: bump version to v1.3.11-dev 2026-01-03 11:54:31 +00:00
yyhuni
8a8062a12d refactor(scan): rename merged_configuration to yaml_configuration
- Rename `merged_configuration` field to `yaml_configuration` in Scan and ScheduledScan models for clarity
- Update all references across scan repositories, services, views, and serializers
- Update database migration to reflect field name change with improved help text
- Update frontend components to use new field naming convention
- Add YAML editor component for improved configuration editing in UI
- Update engine configuration retrieval in initiate_scan_flow to use new field name
- Remove unused asset tasks __init__.py module
- Simplify README feedback section for better clarity
- Update frontend type definitions and internationalization messages for consistency
2026-01-03 19:50:20 +08:00
yyhuni
55908a2da5 fix(asset,scan): improve decorator usage and dialog layout
- Fix transaction.non_atomic_requests decorator usage in AssetSearchExportView by wrapping with method_decorator for proper class-based view compatibility
- Update scan progress dialog to use flexible width (sm:max-w-fit sm:min-w-[450px]) instead of fixed width for better responsiveness
- Refactor engine names display from single Badge to grid layout with multiple badges for improved readability when multiple engines are present
- Add proper spacing and alignment adjustments (gap-4, items-start) to accommodate multi-line engine badge display
- Add text-xs and whitespace-nowrap to engine badges for consistent styling in grid layout
2026-01-03 18:46:44 +08:00
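A minimal sketch of the decorator pattern this commit describes for the class-based export view; the view body below is a placeholder, not the project's export logic:

```python
# Applying transaction.non_atomic_requests to a class-based view via
# method_decorator, as this commit describes for AssetSearchExportView.
from django.db import transaction
from django.utils.decorators import method_decorator
from rest_framework.response import Response
from rest_framework.views import APIView

@method_decorator(transaction.non_atomic_requests, name='dispatch')
class AssetSearchExportView(APIView):
    def get(self, request):
        # Runs outside ATOMIC_REQUESTS, so a long streaming export does not
        # hold a single database transaction open for the whole response.
        return Response({"status": "ok"})  # placeholder body
```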
github-actions[bot]
22a7d4f091 chore: bump version to v1.3.10-dev 2026-01-03 10:45:32 +00:00
yyhuni
f287f18134 Update pinned images 2026-01-03 18:33:25 +08:00
yyhuni
de27230b7a Update build CI 2026-01-03 18:28:57 +08:00
github-actions[bot]
15a6295189 chore: bump version to v1.3.8-dev 2026-01-03 10:24:17 +00:00
yyhuni
674acdac66 refactor(asset): move database extension initialization to migrations
- Remove pg_trgm and pg_ivm extension setup from AssetConfig.ready() method
- Move extension creation to migration 0002 using RunSQL operations
- Add pg_trgm extension creation for text search index support
- Add pg_ivm extension creation for IMMV incremental maintenance
- Generate unique cursor names in search_service to prevent concurrent request conflicts
- Add @transaction.non_atomic_requests decorator to export view for server-side cursor compatibility
- Simplify app initialization by delegating extension setup to database migrations
- Improve thread safety and concurrency handling for streaming exports
2026-01-03 18:20:27 +08:00
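Moving the extension setup into migrations amounts to `RunSQL` operations like the ones visible in the asset migration diff below; a compact sketch:

```python
# Compact sketch of creating PostgreSQL extensions from a migration instead of
# AppConfig.ready(); the project's actual migration appears in the diff below.
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [('asset', '0001_initial')]  # assumed predecessor

    operations = [
        migrations.RunSQL(
            sql="CREATE EXTENSION IF NOT EXISTS pg_trgm;",
            reverse_sql=migrations.RunSQL.noop,
        ),
        migrations.RunSQL(
            sql="CREATE EXTENSION IF NOT EXISTS pg_ivm;",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]
```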
github-actions[bot]
c59152bedf chore: bump version to v1.3.7-dev 2026-01-03 09:56:39 +00:00
yyhuni
b4037202dc feat: use registry cache for faster builds 2026-01-03 17:35:54 +08:00
yyhuni
4b4f9862bf ci(docker): add postgres image build configuration and update image tags
- Add xingrin-postgres image build job to docker-build workflow for multi-platform support (linux/amd64,linux/arm64)
- Update docker-compose.dev.yml to use IMAGE_TAG variable with dev as default fallback
- Update docker-compose.yml to use IMAGE_TAG variable with required validation
- Replace hardcoded postgres image tag (15) with dynamic IMAGE_TAG for better version management
- Enable flexible image tagging across development and production environments
2026-01-03 17:26:34 +08:00
github-actions[bot]
1c42e4978f chore: bump version to v1.3.5-dev 2026-01-03 08:44:06 +00:00
github-actions[bot]
57bab63997 chore: bump version to v1.3.3-dev 2026-01-03 05:55:07 +00:00
github-actions[bot]
b1f0f18ac0 chore: bump version to v1.3.4-dev 2026-01-03 05:54:50 +00:00
yyhuni
ccee5471b8 docs(readme): add notification push service documentation
- Add notification push service feature to visualization interface section
- Document support for real-time WeChat Work, Telegram, and Discord message push
- Enhance feature list clarity for notification capabilities
2026-01-03 13:34:36 +08:00
yyhuni
0ccd362535 Optimize download logic 2026-01-03 13:32:58 +08:00
62 changed files with 2834 additions and 1081 deletions

View File

@@ -19,7 +19,8 @@ permissions:
contents: write
jobs:
build:
# AMD64 构建(原生 x64 runner)
build-amd64:
runs-on: ubuntu-latest
strategy:
matrix:
@@ -27,39 +28,30 @@ jobs:
- image: xingrin-server
dockerfile: docker/server/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-frontend
dockerfile: docker/frontend/Dockerfile
context: .
platforms: linux/amd64 # ARM64 构建时 Next.js 在 QEMU 下会崩溃
- image: xingrin-worker
dockerfile: docker/worker/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-nginx
dockerfile: docker/nginx/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-agent
dockerfile: docker/agent/Dockerfile
context: .
platforms: linux/amd64,linux/arm64
- image: xingrin-postgres
dockerfile: docker/postgres/Dockerfile
context: docker/postgres
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Free disk space (for large builds like worker)
- name: Free disk space
run: |
echo "=== Before cleanup ==="
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc /opt/hostedtoolcache/CodeQL
sudo docker image prune -af
echo "=== After cleanup ==="
df -h
- name: Generate SSL certificates for nginx build
if: matrix.image == 'xingrin-nginx'
@@ -69,10 +61,6 @@ jobs:
-keyout docker/nginx/ssl/privkey.pem \
-out docker/nginx/ssl/fullchain.pem \
-subj "/CN=localhost"
echo "SSL certificates generated for CI build"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -83,7 +71,120 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version from git tag
- name: Get version
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
fi
- name: Build and push AMD64
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: linux/amd64
push: true
tags: ${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}-amd64
build-args: IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-amd64
cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-amd64,mode=max
provenance: false
sbom: false
# ARM64 构建(原生 ARM64 runner)
build-arm64:
runs-on: ubuntu-22.04-arm
strategy:
matrix:
include:
- image: xingrin-server
dockerfile: docker/server/Dockerfile
context: .
- image: xingrin-frontend
dockerfile: docker/frontend/Dockerfile
context: .
- image: xingrin-worker
dockerfile: docker/worker/Dockerfile
context: .
- image: xingrin-nginx
dockerfile: docker/nginx/Dockerfile
context: .
- image: xingrin-agent
dockerfile: docker/agent/Dockerfile
context: .
- image: xingrin-postgres
dockerfile: docker/postgres/Dockerfile
context: docker/postgres
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Generate SSL certificates for nginx build
if: matrix.image == 'xingrin-nginx'
run: |
mkdir -p docker/nginx/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout docker/nginx/ssl/privkey.pem \
-out docker/nginx/ssl/fullchain.pem \
-subj "/CN=localhost"
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
else
echo "VERSION=dev-$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
fi
- name: Build and push ARM64
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: linux/arm64
push: true
tags: ${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}-arm64
build-args: IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-arm64
cache-to: type=registry,ref=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:cache-arm64,mode=max
provenance: false
sbom: false
# 合并多架构 manifest
merge-manifests:
runs-on: ubuntu-latest
needs: [build-amd64, build-arm64]
strategy:
matrix:
image:
- xingrin-server
- xingrin-frontend
- xingrin-worker
- xingrin-nginx
- xingrin-agent
- xingrin-postgres
steps:
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Get version
id: version
run: |
if [[ $GITHUB_REF == refs/tags/* ]]; then
@@ -94,28 +195,27 @@ jobs:
echo "IS_RELEASE=false" >> $GITHUB_OUTPUT
fi
- name: Build and push
uses: docker/build-push-action@v5
with:
context: ${{ matrix.context }}
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: true
tags: |
${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}
${{ steps.version.outputs.IS_RELEASE == 'true' && format('{0}/{1}:latest', env.IMAGE_PREFIX, matrix.image) || '' }}
build-args: |
IMAGE_TAG=${{ steps.version.outputs.VERSION }}
cache-from: type=gha,scope=${{ matrix.image }}
cache-to: type=gha,mode=max,scope=${{ matrix.image }}
provenance: false
sbom: false
- name: Create and push multi-arch manifest
run: |
VERSION=${{ steps.version.outputs.VERSION }}
IMAGE=${{ env.IMAGE_PREFIX }}/${{ matrix.image }}
docker manifest create ${IMAGE}:${VERSION} \
${IMAGE}:${VERSION}-amd64 \
${IMAGE}:${VERSION}-arm64
docker manifest push ${IMAGE}:${VERSION}
if [[ "${{ steps.version.outputs.IS_RELEASE }}" == "true" ]]; then
docker manifest create ${IMAGE}:latest \
${IMAGE}:${VERSION}-amd64 \
${IMAGE}:${VERSION}-arm64
docker manifest push ${IMAGE}:latest
fi
# 所有镜像构建成功后,更新 VERSION 文件
# 根据 tag 所在的分支更新对应分支的 VERSION 文件
# 更新 VERSION 文件
update-version:
runs-on: ubuntu-latest
needs: build
needs: merge-manifests
if: startsWith(github.ref, 'refs/tags/v')
steps:
- name: Checkout repository

View File

@@ -13,14 +13,14 @@
<p align="center">
<a href="#-功能特性">功能特性</a> •
<a href="#-全局资产搜索">资产搜索</a> •
<a href="#-快速开始">快速开始</a> •
<a href="#-文档">文档</a> •
<a href="#-技术栈">技术栈</a> •
<a href="#-反馈与贡献">反馈与贡献</a>
</p>
<p align="center">
<sub>🔍 关键词: ASM | 攻击面管理 | 漏洞扫描 | 资产发现 | Bug Bounty | 渗透测试 | Nuclei | 子域名枚举 | EASM</sub>
<sub>🔍 关键词: ASM | 攻击面管理 | 漏洞扫描 | 资产发现 | 资产搜索 | Bug Bounty | 渗透测试 | Nuclei | 子域名枚举 | EASM</sub>
</p>
---
@@ -162,9 +162,34 @@ flowchart TB
W3 -.心跳上报.-> REDIS
```
### 🔎 全局资产搜索
- **多类型搜索** - 支持 Website 和 Endpoint 两种资产类型
- **表达式语法** - 支持 `=`(模糊)、`==`(精确)、`!=`(不等于)操作符
- **逻辑组合** - 支持 `&&` (AND) 和 `||` (OR) 逻辑组合
- **多字段查询** - 支持 host、url、title、tech、status、body、header 字段
- **CSV 导出** - 流式导出全部搜索结果,无数量限制
#### 搜索语法示例
```bash
# 基础搜索
host="api" # host 包含 "api"
status=="200" # 状态码精确等于 200
tech="nginx" # 技术栈包含 nginx
# 组合搜索
host="api" && status=="200" # host 包含 api 且状态码为 200
tech="vue" || tech="react" # 技术栈包含 vue 或 react
# 复杂查询
host="admin" && tech="php" && status=="200"
url="/api/v1" && status!="404"
```
### 📊 可视化界面
- **数据统计** - 资产/漏洞统计仪表盘
- **实时通知** - WebSocket 消息推送
- **通知推送** - 实时企业微信、Telegram、Discord 消息推送服务
---
@@ -172,7 +197,8 @@ flowchart TB
### 环境要求
- **操作系统**: Ubuntu 20.04+ / Debian 11+ (推荐)
- **操作系统**: Ubuntu 20.04+ / Debian 11+
- **系统架构**: AMD64 (x86_64) / ARM64 (aarch64)
- **硬件**: 2核 4G 内存起步,20GB+ 磁盘空间
### 一键安装
@@ -197,6 +223,7 @@ sudo ./install.sh --mirror
### 访问服务
- **Web 界面**: `https://ip:8083`
- **默认账号**: admin / admin(首次登录后请修改密码)
### 常用命令
@@ -216,13 +243,9 @@ sudo ./uninstall.sh
## 🤝 反馈与贡献
- 🐛 **如果发现 Bug** 可以点击右边链接进行提交 [Issue](https://github.com/yyhuni/xingrin/issues)
- 💡 **有新想法比如UI设计功能设计等** 欢迎点击右边链接进行提交建议 [Issue](https://github.com/yyhuni/xingrin/issues)
- 💡 **发现 Bug、有新想法(比如 UI 设计、功能设计等)** 欢迎点击右边链接进行提交建议 [Issue](https://github.com/yyhuni/xingrin/issues) 或者公众号私信
## 📧 联系
- 目前版本就我个人使用,可能会有很多边界问题
- 如有问题,建议,其他,优先提交[Issue](https://github.com/yyhuni/xingrin/issues),也可以直接给我的公众号发消息,我都会回复的
- 微信公众号: **塔罗安全学苑**
<img src="docs/wechat-qrcode.png" alt="微信公众号" width="200">

View File

@@ -1 +1 @@
v1.3.2-dev
v1.3.12-dev

View File

@@ -1,106 +1,6 @@
import logging
import sys
from django.apps import AppConfig
logger = logging.getLogger(__name__)
class AssetConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'apps.asset'
def ready(self):
# 导入所有模型以确保Django发现并注册
from . import models
# 启用 pg_trgm 扩展(用于文本模糊搜索索引)
# 用于已有数据库升级场景
self._ensure_pg_trgm_extension()
# 验证 pg_ivm 扩展是否可用(用于 IMMV 增量维护)
self._verify_pg_ivm_extension()
def _ensure_pg_trgm_extension(self):
"""
确保 pg_trgm 扩展已启用。
该扩展用于 response_body 和 response_headers 字段的 GIN 索引,
支持高效的文本模糊搜索。
"""
from django.db import connection
# 检查是否为 PostgreSQL 数据库
if connection.vendor != 'postgresql':
logger.debug("跳过 pg_trgm 扩展:当前数据库不是 PostgreSQL")
return
try:
with connection.cursor() as cursor:
cursor.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
logger.debug("pg_trgm 扩展已启用")
except Exception as e:
# 记录错误但不阻止应用启动
# 常见原因:权限不足(需要超级用户权限)
logger.warning(
"无法创建 pg_trgm 扩展: %s"
"这可能导致 response_body 和 response_headers 字段的 GIN 索引无法正常工作。"
"请手动执行: CREATE EXTENSION IF NOT EXISTS pg_trgm;",
str(e)
)
def _verify_pg_ivm_extension(self):
"""
验证 pg_ivm 扩展是否可用。
pg_ivm 用于 IMMV增量维护物化视图是系统必需的扩展。
如果不可用,将记录错误并退出。
"""
from django.db import connection
# 检查是否为 PostgreSQL 数据库
if connection.vendor != 'postgresql':
logger.debug("跳过 pg_ivm 验证:当前数据库不是 PostgreSQL")
return
# 跳过某些管理命令(如 migrate、makemigrations
import sys
if len(sys.argv) > 1 and sys.argv[1] in ('migrate', 'makemigrations', 'collectstatic', 'check'):
logger.debug("跳过 pg_ivm 验证:当前为管理命令")
return
try:
with connection.cursor() as cursor:
# 检查 pg_ivm 扩展是否已安装
cursor.execute("""
SELECT COUNT(*) FROM pg_extension WHERE extname = 'pg_ivm'
""")
count = cursor.fetchone()[0]
if count > 0:
logger.info("✓ pg_ivm 扩展已启用")
else:
# 尝试创建扩展
try:
cursor.execute("CREATE EXTENSION IF NOT EXISTS pg_ivm;")
logger.info("✓ pg_ivm 扩展已创建并启用")
except Exception as create_error:
logger.error(
"=" * 60 + "\n"
"错误: pg_ivm 扩展未安装\n"
"=" * 60 + "\n"
"pg_ivm 是系统必需的扩展,用于增量维护物化视图。\n\n"
"请在 PostgreSQL 服务器上安装 pg_ivm\n"
" curl -sSL https://raw.githubusercontent.com/yyhuni/xingrin/main/docker/scripts/install-pg-ivm.sh | sudo bash\n\n"
"或手动安装:\n"
" 1. apt install build-essential postgresql-server-dev-15 git\n"
" 2. git clone https://github.com/sraoss/pg_ivm.git && cd pg_ivm && make && make install\n"
" 3. 在 postgresql.conf 中添加: shared_preload_libraries = 'pg_ivm'\n"
" 4. 重启 PostgreSQL\n"
"=" * 60
)
# 在生产环境中退出,开发环境中仅警告
from django.conf import settings
if not settings.DEBUG:
sys.exit(1)
except Exception as e:
logger.error(f"pg_ivm 扩展验证失败: {e}")

View File

@@ -6,6 +6,18 @@
包含:
1. asset_search_view - Website 搜索视图
2. endpoint_search_view - Endpoint 搜索视图
重要限制:
⚠️ pg_ivm 不支持数组类型字段ArrayField因为其使用 anyarray 伪类型进行比较时,
PostgreSQL 无法确定空数组的元素类型,导致错误:
"cannot determine element type of \"anyarray\" argument"
因此,所有 ArrayField 字段tech, matched_gf_patterns 等)已从 IMMV 中移除,
搜索时通过 JOIN 原表获取。
如需添加新的数组字段,请:
1. 不要将其包含在 IMMV 视图中
2. 在搜索服务中通过 JOIN 原表获取
"""
from django.db import migrations
@@ -18,7 +30,13 @@ class Migration(migrations.Migration):
]
operations = [
# 1. 确保 pg_ivm 扩展已启用
# 1. 确保 pg_trgm 扩展已启用(用于文本模糊搜索索引)
migrations.RunSQL(
sql="CREATE EXTENSION IF NOT EXISTS pg_trgm;",
reverse_sql="-- pg_trgm extension kept for other uses"
),
# 2. 确保 pg_ivm 扩展已启用(用于 IMMV 增量维护)
migrations.RunSQL(
sql="CREATE EXTENSION IF NOT EXISTS pg_ivm;",
reverse_sql="-- pg_ivm extension kept for other uses"
@@ -27,6 +45,8 @@ class Migration(migrations.Migration):
# ==================== Website IMMV ====================
# 2. 创建 asset_search_view IMMV
# ⚠️ 注意:不包含 w.tech 数组字段pg_ivm 不支持 ArrayField
# 数组字段通过 search_service.py 中 JOIN website 表获取
migrations.RunSQL(
sql="""
SELECT pgivm.create_immv('asset_search_view', $$
@@ -35,7 +55,6 @@ class Migration(migrations.Migration):
w.url,
w.host,
w.title,
w.tech,
w.status_code,
w.response_headers,
w.response_body,
@@ -79,10 +98,6 @@ class Migration(migrations.Migration):
CREATE INDEX IF NOT EXISTS asset_search_view_body_trgm_idx
ON asset_search_view USING gin (response_body gin_trgm_ops);
-- tech 数组索引
CREATE INDEX IF NOT EXISTS asset_search_view_tech_idx
ON asset_search_view USING gin (tech);
-- status_code 索引
CREATE INDEX IF NOT EXISTS asset_search_view_status_idx
ON asset_search_view (status_code);
@@ -98,7 +113,6 @@ class Migration(migrations.Migration):
DROP INDEX IF EXISTS asset_search_view_url_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_headers_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_body_trgm_idx;
DROP INDEX IF EXISTS asset_search_view_tech_idx;
DROP INDEX IF EXISTS asset_search_view_status_idx;
DROP INDEX IF EXISTS asset_search_view_created_idx;
"""
@@ -107,6 +121,8 @@ class Migration(migrations.Migration):
# ==================== Endpoint IMMV ====================
# 4. 创建 endpoint_search_view IMMV
# ⚠️ 注意:不包含 e.tech 和 e.matched_gf_patterns 数组字段pg_ivm 不支持 ArrayField
# 数组字段通过 search_service.py 中 JOIN endpoint 表获取
migrations.RunSQL(
sql="""
SELECT pgivm.create_immv('endpoint_search_view', $$
@@ -115,7 +131,6 @@ class Migration(migrations.Migration):
e.url,
e.host,
e.title,
e.tech,
e.status_code,
e.response_headers,
e.response_body,
@@ -124,7 +139,6 @@ class Migration(migrations.Migration):
e.webserver,
e.location,
e.vhost,
e.matched_gf_patterns,
e.created_at,
e.target_id
FROM endpoint e
@@ -160,10 +174,6 @@ class Migration(migrations.Migration):
CREATE INDEX IF NOT EXISTS endpoint_search_view_body_trgm_idx
ON endpoint_search_view USING gin (response_body gin_trgm_ops);
-- tech 数组索引
CREATE INDEX IF NOT EXISTS endpoint_search_view_tech_idx
ON endpoint_search_view USING gin (tech);
-- status_code 索引
CREATE INDEX IF NOT EXISTS endpoint_search_view_status_idx
ON endpoint_search_view (status_code);
@@ -179,7 +189,6 @@ class Migration(migrations.Migration):
DROP INDEX IF EXISTS endpoint_search_view_url_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_headers_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_body_trgm_idx;
DROP INDEX IF EXISTS endpoint_search_view_tech_idx;
DROP INDEX IF EXISTS endpoint_search_view_status_idx;
DROP INDEX IF EXISTS endpoint_search_view_created_idx;
"""

View File

@@ -11,7 +11,7 @@
import logging
import re
from typing import Optional, List, Dict, Any, Tuple, Literal
from typing import Optional, List, Dict, Any, Tuple, Literal, Iterator
from django.db import connection
@@ -37,46 +37,55 @@ VIEW_MAPPING = {
'endpoint': 'endpoint_search_view',
}
# 资产类型到原表名的映射(用于 JOIN 获取数组字段)
# ⚠️ 重要pg_ivm 不支持 ArrayField所有数组字段必须从原表 JOIN 获取
TABLE_MAPPING = {
'website': 'website',
'endpoint': 'endpoint',
}
# 有效的资产类型
VALID_ASSET_TYPES = {'website', 'endpoint'}
# Website 查询字段
# Website 查询字段v=视图t=原表)
# ⚠️ 注意t.tech 从原表获取,因为 pg_ivm 不支持 ArrayField
WEBSITE_SELECT_FIELDS = """
id,
url,
host,
title,
tech,
status_code,
response_headers,
response_body,
content_type,
content_length,
webserver,
location,
vhost,
created_at,
target_id
v.id,
v.url,
v.host,
v.title,
t.tech, -- ArrayField从 website 表 JOIN 获取
v.status_code,
v.response_headers,
v.response_body,
v.content_type,
v.content_length,
v.webserver,
v.location,
v.vhost,
v.created_at,
v.target_id
"""
# Endpoint 查询字段(包含 matched_gf_patterns
# Endpoint 查询字段
# ⚠️ 注意t.tech 和 t.matched_gf_patterns 从原表获取,因为 pg_ivm 不支持 ArrayField
ENDPOINT_SELECT_FIELDS = """
id,
url,
host,
title,
tech,
status_code,
response_headers,
response_body,
content_type,
content_length,
webserver,
location,
vhost,
matched_gf_patterns,
created_at,
target_id
v.id,
v.url,
v.host,
v.title,
t.tech, -- ArrayField从 endpoint 表 JOIN 获取
v.status_code,
v.response_headers,
v.response_body,
v.content_type,
v.content_length,
v.webserver,
v.location,
v.vhost,
t.matched_gf_patterns, -- ArrayField从 endpoint 表 JOIN 获取
v.created_at,
v.target_id
"""
@@ -119,8 +128,8 @@ class SearchQueryParser:
# 检查是否包含操作符语法,如果不包含则作为 host 模糊搜索
if not cls.CONDITION_PATTERN.search(query):
# 裸文本,默认作为 host 模糊搜索
return "host ILIKE %s", [f"%{query}%"]
# 裸文本,默认作为 host 模糊搜索v 是视图别名)
return "v.host ILIKE %s", [f"%{query}%"]
# 按 || 分割为 OR 组
or_groups = cls._split_by_or(query)
@@ -273,45 +282,45 @@ class SearchQueryParser:
def _build_like_condition(cls, field: str, value: str, is_array: bool) -> Tuple[str, List[Any]]:
"""构建模糊匹配条件"""
if is_array:
# 数组字段:检查数组中是否有元素包含该值
return f"EXISTS (SELECT 1 FROM unnest({field}) AS t WHERE t ILIKE %s)", [f"%{value}%"]
# 数组字段:检查数组中是否有元素包含该值(从原表 t 获取)
return f"EXISTS (SELECT 1 FROM unnest(t.{field}) AS elem WHERE elem ILIKE %s)", [f"%{value}%"]
elif field == 'status_code':
# 状态码是整数,模糊匹配转为精确匹配
try:
return f"{field} = %s", [int(value)]
return f"v.{field} = %s", [int(value)]
except ValueError:
return f"{field}::text ILIKE %s", [f"%{value}%"]
return f"v.{field}::text ILIKE %s", [f"%{value}%"]
else:
return f"{field} ILIKE %s", [f"%{value}%"]
return f"v.{field} ILIKE %s", [f"%{value}%"]
@classmethod
def _build_exact_condition(cls, field: str, value: str, is_array: bool) -> Tuple[str, List[Any]]:
"""构建精确匹配条件"""
if is_array:
# 数组字段:检查数组中是否包含该精确值
return f"%s = ANY({field})", [value]
# 数组字段:检查数组中是否包含该精确值(从原表 t 获取)
return f"%s = ANY(t.{field})", [value]
elif field == 'status_code':
# 状态码是整数
try:
return f"{field} = %s", [int(value)]
return f"v.{field} = %s", [int(value)]
except ValueError:
return f"{field}::text = %s", [value]
return f"v.{field}::text = %s", [value]
else:
return f"{field} = %s", [value]
return f"v.{field} = %s", [value]
@classmethod
def _build_not_equal_condition(cls, field: str, value: str, is_array: bool) -> Tuple[str, List[Any]]:
"""构建不等于条件"""
if is_array:
# 数组字段:检查数组中不包含该值
return f"NOT (%s = ANY({field}))", [value]
# 数组字段:检查数组中不包含该值(从原表 t 获取)
return f"NOT (%s = ANY(t.{field}))", [value]
elif field == 'status_code':
try:
return f"({field} IS NULL OR {field} != %s)", [int(value)]
return f"(v.{field} IS NULL OR v.{field} != %s)", [int(value)]
except ValueError:
return f"({field} IS NULL OR {field}::text != %s)", [value]
return f"(v.{field} IS NULL OR v.{field}::text != %s)", [value]
else:
return f"({field} IS NULL OR {field} != %s)", [value]
return f"(v.{field} IS NULL OR v.{field} != %s)", [value]
AssetType = Literal['website', 'endpoint']
@@ -339,15 +348,18 @@ class AssetSearchService:
"""
where_clause, params = SearchQueryParser.parse(query)
# 根据资产类型选择视图和字段
# 根据资产类型选择视图、原表和字段
view_name = VIEW_MAPPING.get(asset_type, 'asset_search_view')
table_name = TABLE_MAPPING.get(asset_type, 'website')
select_fields = ENDPOINT_SELECT_FIELDS if asset_type == 'endpoint' else WEBSITE_SELECT_FIELDS
# JOIN 原表获取数组字段tech, matched_gf_patterns
sql = f"""
SELECT {select_fields}
FROM {view_name}
FROM {view_name} v
JOIN {table_name} t ON v.id = t.id
WHERE {where_clause}
ORDER BY created_at DESC
ORDER BY v.created_at DESC
"""
# 添加 LIMIT
@@ -369,28 +381,97 @@ class AssetSearchService:
logger.error(f"搜索查询失败: {e}, SQL: {sql}, params: {params}")
raise
def count(self, query: str, asset_type: AssetType = 'website') -> int:
def count(self, query: str, asset_type: AssetType = 'website', statement_timeout_ms: int = 300000) -> int:
"""
统计搜索结果数量
Args:
query: 搜索查询字符串
asset_type: 资产类型 ('website''endpoint')
statement_timeout_ms: SQL 语句超时时间(毫秒),默认 5 分钟
Returns:
int: 结果总数
"""
where_clause, params = SearchQueryParser.parse(query)
# 根据资产类型选择视图
# 根据资产类型选择视图和原表
view_name = VIEW_MAPPING.get(asset_type, 'asset_search_view')
table_name = TABLE_MAPPING.get(asset_type, 'website')
sql = f"SELECT COUNT(*) FROM {view_name} WHERE {where_clause}"
# JOIN 原表以支持数组字段查询
sql = f"SELECT COUNT(*) FROM {view_name} v JOIN {table_name} t ON v.id = t.id WHERE {where_clause}"
try:
with connection.cursor() as cursor:
# 为导出设置更长的超时时间(仅影响当前会话)
cursor.execute(f"SET LOCAL statement_timeout = {statement_timeout_ms}")
cursor.execute(sql, params)
return cursor.fetchone()[0]
except Exception as e:
logger.error(f"统计查询失败: {e}")
raise
def search_iter(
self,
query: str,
asset_type: AssetType = 'website',
batch_size: int = 1000,
statement_timeout_ms: int = 300000
) -> Iterator[Dict[str, Any]]:
"""
流式搜索资产(使用分批查询,内存友好)
Args:
query: 搜索查询字符串
asset_type: 资产类型 ('website''endpoint')
batch_size: 每批获取的数量
statement_timeout_ms: SQL 语句超时时间(毫秒),默认 5 分钟
Yields:
Dict: 单条搜索结果
"""
where_clause, params = SearchQueryParser.parse(query)
# 根据资产类型选择视图、原表和字段
view_name = VIEW_MAPPING.get(asset_type, 'asset_search_view')
table_name = TABLE_MAPPING.get(asset_type, 'website')
select_fields = ENDPOINT_SELECT_FIELDS if asset_type == 'endpoint' else WEBSITE_SELECT_FIELDS
# 使用 OFFSET/LIMIT 分批查询Django 不支持命名游标)
offset = 0
try:
while True:
# JOIN 原表获取数组字段
sql = f"""
SELECT {select_fields}
FROM {view_name} v
JOIN {table_name} t ON v.id = t.id
WHERE {where_clause}
ORDER BY v.created_at DESC
LIMIT {batch_size} OFFSET {offset}
"""
with connection.cursor() as cursor:
# 为导出设置更长的超时时间(仅影响当前会话)
cursor.execute(f"SET LOCAL statement_timeout = {statement_timeout_ms}")
cursor.execute(sql, params)
columns = [col[0] for col in cursor.description]
rows = cursor.fetchall()
if not rows:
break
for row in rows:
yield dict(zip(columns, row))
# 如果返回的行数少于 batch_size说明已经是最后一批
if len(rows) < batch_size:
break
offset += batch_size
except Exception as e:
logger.error(f"流式搜索查询失败: {e}, SQL: {sql}, params: {params}")
raise
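A hedged usage sketch of how the new `search_iter()` pairs with the `create_csv_export_response()` helper added in apps.common (its diff appears further below); the header list and formatters here are illustrative, not the project's exact export columns:

```python
# Usage sketch, assuming it runs alongside AssetSearchService (defined in this
# module); headers/formatters are illustrative choices.
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field

service = AssetSearchService()
rows = service.search_iter('tech="nginx" && status=="200"', asset_type='website',
                           batch_size=1000, statement_timeout_ms=600_000)

response = create_csv_export_response(
    data_iterator=rows,
    headers=['url', 'host', 'title', 'status_code', 'tech', 'created_at'],
    filename='search_website_export.csv',
    field_formatters={
        'created_at': format_datetime,
        'tech': lambda x: format_list_field(x, separator='; '),
    },
    show_progress=True,  # temp-file FileResponse with Content-Length
)
```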

View File

@@ -1,7 +0,0 @@
"""
Asset 应用的任务模块
注意:物化视图刷新已移至 APScheduler 定时任务apps.engine.scheduler
"""
__all__ = []

View File

@@ -8,7 +8,6 @@ from rest_framework.request import Request
from rest_framework.exceptions import NotFound, ValidationError as DRFValidationError
from django.core.exceptions import ValidationError, ObjectDoesNotExist
from django.db import DatabaseError, IntegrityError, OperationalError
from django.http import StreamingHttpResponse
from ..serializers import (
SubdomainListSerializer, WebSiteSerializer, DirectorySerializer,
@@ -243,7 +242,7 @@ class SubdomainViewSet(viewsets.ModelViewSet):
CSV 列name, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -254,12 +253,12 @@ class SubdomainViewSet(viewsets.ModelViewSet):
headers = ['name', 'created_at']
formatters = {'created_at': format_datetime}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-subdomains.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-subdomains.csv"'
return response
class WebSiteViewSet(viewsets.ModelViewSet):
@@ -369,7 +368,7 @@ class WebSiteViewSet(viewsets.ModelViewSet):
CSV 列url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -387,12 +386,12 @@ class WebSiteViewSet(viewsets.ModelViewSet):
'tech': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-websites.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-websites.csv"'
return response
class DirectoryViewSet(viewsets.ModelViewSet):
@@ -499,7 +498,7 @@ class DirectoryViewSet(viewsets.ModelViewSet):
CSV 列url, status, content_length, words, lines, content_type, duration, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -515,12 +514,12 @@ class DirectoryViewSet(viewsets.ModelViewSet):
'created_at': format_datetime,
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-directories.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-directories.csv"'
return response
class EndpointViewSet(viewsets.ModelViewSet):
@@ -630,7 +629,7 @@ class EndpointViewSet(viewsets.ModelViewSet):
CSV 列url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, matched_gf_patterns, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -649,12 +648,12 @@ class EndpointViewSet(viewsets.ModelViewSet):
'matched_gf_patterns': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-endpoints.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-endpoints.csv"'
return response
class HostPortMappingViewSet(viewsets.ModelViewSet):
@@ -707,7 +706,7 @@ class HostPortMappingViewSet(viewsets.ModelViewSet):
CSV 列ip, host, port, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
target_pk = self.kwargs.get('target_pk')
if not target_pk:
@@ -722,14 +721,12 @@ class HostPortMappingViewSet(viewsets.ModelViewSet):
'created_at': format_datetime
}
# 生成流式响应
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"target-{target_pk}-ip-addresses.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="target-{target_pk}-ip-addresses.csv"'
return response
class VulnerabilityViewSet(viewsets.ModelViewSet):
@@ -801,7 +798,7 @@ class SubdomainSnapshotViewSet(viewsets.ModelViewSet):
CSV 列name, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -812,12 +809,12 @@ class SubdomainSnapshotViewSet(viewsets.ModelViewSet):
headers = ['name', 'created_at']
formatters = {'created_at': format_datetime}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-subdomains.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-subdomains.csv"'
return response
class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
@@ -855,7 +852,7 @@ class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
CSV 列url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -873,12 +870,12 @@ class WebsiteSnapshotViewSet(viewsets.ModelViewSet):
'tech': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-websites.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-websites.csv"'
return response
class DirectorySnapshotViewSet(viewsets.ModelViewSet):
@@ -913,7 +910,7 @@ class DirectorySnapshotViewSet(viewsets.ModelViewSet):
CSV 列url, status, content_length, words, lines, content_type, duration, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -929,12 +926,12 @@ class DirectorySnapshotViewSet(viewsets.ModelViewSet):
'created_at': format_datetime,
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-directories.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-directories.csv"'
return response
class EndpointSnapshotViewSet(viewsets.ModelViewSet):
@@ -972,7 +969,7 @@ class EndpointSnapshotViewSet(viewsets.ModelViewSet):
CSV 列url, host, location, title, status_code, content_length, content_type, webserver, tech, response_body, response_headers, vhost, matched_gf_patterns, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime, format_list_field
from apps.common.utils import create_csv_export_response, format_datetime, format_list_field
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -991,12 +988,12 @@ class EndpointSnapshotViewSet(viewsets.ModelViewSet):
'matched_gf_patterns': lambda x: format_list_field(x, separator=','),
}
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-endpoints.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-endpoints.csv"'
return response
class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
@@ -1031,7 +1028,7 @@ class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
CSV 列ip, host, port, created_at
"""
from apps.common.utils import generate_csv_rows, format_datetime
from apps.common.utils import create_csv_export_response, format_datetime
scan_pk = self.kwargs.get('scan_pk')
if not scan_pk:
@@ -1046,14 +1043,12 @@ class HostPortMappingSnapshotViewSet(viewsets.ModelViewSet):
'created_at': format_datetime
}
# 生成流式响应
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, formatters),
content_type='text/csv; charset=utf-8'
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=f"scan-{scan_pk}-ip-addresses.csv",
field_formatters=formatters
)
response['Content-Disposition'] = f'attachment; filename="scan-{scan_pk}-ip-addresses.csv"'
return response
class VulnerabilitySnapshotViewSet(viewsets.ModelViewSet):

View File

@@ -28,14 +28,11 @@
import logging
import json
import csv
from io import StringIO
from datetime import datetime
from urllib.parse import urlparse, urlunparse
from rest_framework import status
from rest_framework.views import APIView
from rest_framework.request import Request
from django.http import StreamingHttpResponse
from django.db import connection
from apps.common.response_helpers import success_response, error_response
@@ -287,76 +284,37 @@ class AssetSearchExportView(APIView):
asset_type: 资产类型 ('website''endpoint',默认 'website')
Response:
CSV 文件
CSV 文件(带 Content-Length支持浏览器显示下载进度
"""
# 导出数量限制
MAX_EXPORT_ROWS = 10000
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.service = AssetSearchService()
def _parse_headers(self, headers_data) -> str:
"""解析响应头为字符串"""
if not headers_data:
return ''
try:
headers = json.loads(headers_data)
return '; '.join(f'{k}: {v}' for k, v in headers.items())
except (json.JSONDecodeError, TypeError):
return str(headers_data)
def _generate_csv(self, results: list, asset_type: str):
"""生成 CSV 内容的生成器"""
# 定义列
def _get_headers_and_formatters(self, asset_type: str):
"""获取 CSV 表头和格式化器"""
from apps.common.utils import format_datetime, format_list_field
if asset_type == 'website':
columns = ['url', 'host', 'title', 'status_code', 'content_type', 'content_length',
headers = ['url', 'host', 'title', 'status_code', 'content_type', 'content_length',
'webserver', 'location', 'tech', 'vhost', 'created_at']
headers = ['URL', 'Host', 'Title', 'Status', 'Content-Type', 'Content-Length',
'Webserver', 'Location', 'Technologies', 'VHost', 'Created At']
else:
columns = ['url', 'host', 'title', 'status_code', 'content_type', 'content_length',
headers = ['url', 'host', 'title', 'status_code', 'content_type', 'content_length',
'webserver', 'location', 'tech', 'matched_gf_patterns', 'vhost', 'created_at']
headers = ['URL', 'Host', 'Title', 'Status', 'Content-Type', 'Content-Length',
'Webserver', 'Location', 'Technologies', 'GF Patterns', 'VHost', 'Created At']
# 写入 BOM 和表头
output = StringIO()
writer = csv.writer(output)
formatters = {
'created_at': format_datetime,
'tech': lambda x: format_list_field(x, separator='; '),
'matched_gf_patterns': lambda x: format_list_field(x, separator='; '),
'vhost': lambda x: 'true' if x else ('false' if x is False else ''),
}
# UTF-8 BOM
yield '\ufeff'
# 表头
writer.writerow(headers)
yield output.getvalue()
output.seek(0)
output.truncate(0)
# 数据行
for result in results:
row = []
for col in columns:
value = result.get(col)
if col == 'tech' or col == 'matched_gf_patterns':
# 数组转字符串
row.append('; '.join(value) if value else '')
elif col == 'created_at':
# 日期格式化
row.append(value.strftime('%Y-%m-%d %H:%M:%S') if value else '')
elif col == 'vhost':
row.append('true' if value else 'false' if value is False else '')
else:
row.append(str(value) if value is not None else '')
writer.writerow(row)
yield output.getvalue()
output.seek(0)
output.truncate(0)
return headers, formatters
def get(self, request: Request):
"""导出搜索结果为 CSV"""
"""导出搜索结果为 CSV(带 Content-Length支持下载进度显示"""
from apps.common.utils import create_csv_export_response
# 获取搜索查询
query = request.query_params.get('q', '').strip()
@@ -376,25 +334,28 @@ class AssetSearchExportView(APIView):
status_code=status.HTTP_400_BAD_REQUEST
)
# 获取搜索结果(限制数量
results = self.service.search(query, asset_type, limit=self.MAX_EXPORT_ROWS)
if not results:
# 检查是否有结果(快速检查,避免空导出
total = self.service.count(query, asset_type)
if total == 0:
return error_response(
code=ErrorCodes.NOT_FOUND,
message='No results to export',
status_code=status.HTTP_404_NOT_FOUND
)
# 获取表头和格式化器
headers, formatters = self._get_headers_and_formatters(asset_type)
# 生成文件名
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
filename = f'search_{asset_type}_{timestamp}.csv'
# 返回流式响应
response = StreamingHttpResponse(
self._generate_csv(results, asset_type),
content_type='text/csv; charset=utf-8'
# 使用通用导出工具
data_iterator = self.service.search_iter(query, asset_type)
return create_csv_export_response(
data_iterator=data_iterator,
headers=headers,
filename=filename,
field_formatters=formatters,
show_progress=True # 显示下载进度
)
response['Content-Disposition'] = f'attachment; filename="{filename}"'
return response

View File

@@ -11,6 +11,7 @@ from .csv_utils import (
generate_csv_rows,
format_list_field,
format_datetime,
create_csv_export_response,
UTF8_BOM,
)
@@ -24,5 +25,6 @@ __all__ = [
'generate_csv_rows',
'format_list_field',
'format_datetime',
'create_csv_export_response',
'UTF8_BOM',
]

View File

@@ -4,13 +4,21 @@
- UTF-8 BOMExcel 兼容)
- RFC 4180 规范转义
- 流式生成(内存友好)
- 带 Content-Length 的文件响应(支持浏览器下载进度显示)
"""
import csv
import io
import os
import tempfile
import logging
from datetime import datetime
from typing import Iterator, Dict, Any, List, Callable, Optional
from django.http import FileResponse, StreamingHttpResponse
logger = logging.getLogger(__name__)
# UTF-8 BOM确保 Excel 正确识别编码
UTF8_BOM = '\ufeff'
@@ -114,3 +122,123 @@ def format_datetime(dt: Optional[datetime]) -> str:
dt = timezone.localtime(dt)
return dt.strftime('%Y-%m-%d %H:%M:%S')
def create_csv_export_response(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
filename: str,
field_formatters: Optional[Dict[str, Callable]] = None,
show_progress: bool = True
) -> FileResponse | StreamingHttpResponse:
"""
创建 CSV 导出响应
根据 show_progress 参数选择响应类型:
- True: 使用临时文件 + FileResponse带 Content-Length浏览器显示下载进度
- False: 使用 StreamingHttpResponse内存更友好但无下载进度
Args:
data_iterator: 数据迭代器,每个元素是一个字典
headers: CSV 表头列表
filename: 下载文件名(如 "export_2024.csv"
field_formatters: 字段格式化函数字典
show_progress: 是否显示下载进度(默认 True
Returns:
FileResponse 或 StreamingHttpResponse
Example:
>>> data_iter = service.iter_data()
>>> headers = ['url', 'host', 'created_at']
>>> formatters = {'created_at': format_datetime}
>>> response = create_csv_export_response(
... data_iter, headers, 'websites.csv', formatters
... )
>>> return response
"""
if show_progress:
return _create_file_response(data_iterator, headers, filename, field_formatters)
else:
return _create_streaming_response(data_iterator, headers, filename, field_formatters)
def _create_file_response(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
filename: str,
field_formatters: Optional[Dict[str, Callable]] = None
) -> FileResponse:
"""
创建带 Content-Length 的文件响应(支持浏览器下载进度)
实现方式:先写入临时文件,再返回 FileResponse
"""
# 创建临时文件
temp_file = tempfile.NamedTemporaryFile(
mode='w',
suffix='.csv',
delete=False,
encoding='utf-8'
)
temp_path = temp_file.name
try:
# 流式写入 CSV 数据到临时文件
for row in generate_csv_rows(data_iterator, headers, field_formatters):
temp_file.write(row)
temp_file.close()
# 获取文件大小
file_size = os.path.getsize(temp_path)
# 创建文件响应
response = FileResponse(
open(temp_path, 'rb'),
content_type='text/csv; charset=utf-8',
as_attachment=True,
filename=filename
)
response['Content-Length'] = file_size
# 设置清理回调:响应完成后删除临时文件
original_close = response.file_to_stream.close
def close_and_cleanup():
original_close()
try:
os.unlink(temp_path)
except OSError:
pass
response.file_to_stream.close = close_and_cleanup
return response
except Exception as e:
# 清理临时文件
try:
temp_file.close()
except:
pass
try:
os.unlink(temp_path)
except OSError:
pass
logger.error(f"创建 CSV 导出响应失败: {e}")
raise
def _create_streaming_response(
data_iterator: Iterator[Dict[str, Any]],
headers: List[str],
filename: str,
field_formatters: Optional[Dict[str, Callable]] = None
) -> StreamingHttpResponse:
"""
创建流式响应(无 Content-Length内存更友好
"""
response = StreamingHttpResponse(
generate_csv_rows(data_iterator, headers, field_formatters),
content_type='text/csv; charset=utf-8'
)
response['Content-Disposition'] = f'attachment; filename="{filename}"'
return response

View File

@@ -33,7 +33,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command, ensure_wordlist_local
from apps.scan.utils import config_parser, build_scan_command, ensure_wordlist_local, user_log
logger = logging.getLogger(__name__)
@@ -413,6 +413,7 @@ def _run_scans_concurrently(
logger.info("="*60)
logger.info("使用工具: %s (并发模式, max_workers=%d)", tool_name, max_workers)
logger.info("="*60)
user_log(scan_id, "directory_scan", f"Running {tool_name}")
# 如果配置了 wordlist_name则先确保本地存在对应的字典文件含 hash 校验)
wordlist_name = tool_config.get('wordlist_name')
@@ -467,6 +468,11 @@ def _run_scans_concurrently(
total_tasks = len(scan_params_list)
logger.info("开始分批执行 %d 个扫描任务(每批 %d 个)...", total_tasks, max_workers)
# 进度里程碑跟踪
last_progress_percent = 0
tool_directories = 0
tool_processed = 0
batch_num = 0
for batch_start in range(0, total_tasks, max_workers):
batch_end = min(batch_start + max_workers, total_tasks)
@@ -498,7 +504,9 @@ def _run_scans_concurrently(
result = future.result() # 阻塞等待单个任务完成
directories_found = result.get('created_directories', 0)
total_directories += directories_found
tool_directories += directories_found
processed_sites_count += 1
tool_processed += 1
logger.info(
"✓ [%d/%d] 站点扫描完成: %s - 发现 %d 个目录",
@@ -517,6 +525,19 @@ def _run_scans_concurrently(
"✗ [%d/%d] 站点扫描失败: %s - 错误: %s",
idx, len(sites), site_url, exc
)
# 进度里程碑:每 20% 输出一次
current_progress = int((batch_end / total_tasks) * 100)
if current_progress >= last_progress_percent + 20:
user_log(scan_id, "directory_scan", f"Progress: {batch_end}/{total_tasks} sites scanned")
last_progress_percent = (current_progress // 20) * 20
# 工具完成日志(开发者日志 + 用户日志)
logger.info(
"✓ 工具 %s 执行完成 - 已处理站点: %d/%d, 发现目录: %d",
tool_name, tool_processed, total_tasks, tool_directories
)
user_log(scan_id, "directory_scan", f"{tool_name} completed: found {tool_directories} directories")
# 输出汇总信息
if failed_sites:
@@ -605,6 +626,8 @@ def directory_scan_flow(
"="*60
)
user_log(scan_id, "directory_scan", "Starting directory scan")
# 参数验证
if scan_id is None:
raise ValueError("scan_id 不能为空")
@@ -625,7 +648,8 @@ def directory_scan_flow(
sites_file, site_count = _export_site_urls(target_id, target_name, directory_scan_dir)
if site_count == 0:
logger.warning("目标下没有站点,跳过目录扫描")
logger.warning("跳过目录扫描:没有站点可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "directory_scan", "Skipped: no sites to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -664,7 +688,9 @@ def directory_scan_flow(
logger.warning("所有站点扫描均失败 - 总站点数: %d, 失败数: %d", site_count, len(failed_sites))
# 不抛出异常,让扫描继续
logger.info("="*60 + "\n✓ 目录扫描完成\n" + "="*60)
# 记录 Flow 完成
logger.info("✓ 目录扫描完成 - 发现目录: %d", total_directories)
user_log(scan_id, "directory_scan", f"directory_scan completed: found {total_directories} directories")
return {
'success': True,

View File

@@ -29,7 +29,7 @@ from apps.scan.tasks.fingerprint_detect import (
export_urls_for_fingerprint_task,
run_xingfinger_and_stream_update_tech_task,
)
from apps.scan.utils import build_scan_command
from apps.scan.utils import build_scan_command, user_log
from apps.scan.utils.fingerprint_helpers import get_fingerprint_paths
logger = logging.getLogger(__name__)
@@ -168,6 +168,7 @@ def _run_fingerprint_detect(
"开始执行 %s 指纹识别 - URL数: %d, 超时: %ds, 指纹库: %s",
tool_name, url_count, timeout, list(fingerprint_paths.keys())
)
user_log(scan_id, "fingerprint_detect", f"Running {tool_name}: {command}")
# 6. 执行扫描任务
try:
@@ -190,17 +191,21 @@ def _run_fingerprint_detect(
'fingerprint_libs': list(fingerprint_paths.keys())
}
tool_updated = result.get('updated_count', 0)
logger.info(
"✓ 工具 %s 执行完成 - 处理记录: %d, 更新: %d, 未找到: %d",
tool_name,
result.get('processed_records', 0),
result.get('updated_count', 0),
tool_updated,
result.get('not_found_count', 0)
)
user_log(scan_id, "fingerprint_detect", f"{tool_name} completed: identified {tool_updated} fingerprints")
except Exception as exc:
failed_tools.append({'tool': tool_name, 'reason': str(exc)})
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("工具 %s 执行失败: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "fingerprint_detect", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
@@ -272,6 +277,8 @@ def fingerprint_detect_flow(
"="*60
)
user_log(scan_id, "fingerprint_detect", "Starting fingerprint detection")
# 参数验证
if scan_id is None:
raise ValueError("scan_id 不能为空")
@@ -293,7 +300,8 @@ def fingerprint_detect_flow(
urls_file, url_count = _export_urls(target_id, fingerprint_dir, source)
if url_count == 0:
logger.warning("目标下没有可用的 URL跳过指纹识别")
logger.warning("跳过指纹识别:没有 URL 可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "fingerprint_detect", "Skipped: no URLs to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -332,8 +340,6 @@ def fingerprint_detect_flow(
source=source
)
logger.info("="*60 + "\n✓ 指纹识别完成\n" + "="*60)
# 动态生成已执行的任务列表
executed_tasks = ['export_urls_for_fingerprint']
executed_tasks.extend([f'run_xingfinger ({tool})' for tool in tool_stats.keys()])
@@ -344,6 +350,10 @@ def fingerprint_detect_flow(
total_created = sum(stats['result'].get('created_count', 0) for stats in tool_stats.values())
total_snapshots = sum(stats['result'].get('snapshot_count', 0) for stats in tool_stats.values())
# 记录 Flow 完成
logger.info("✓ 指纹识别完成 - 识别指纹: %d", total_updated)
user_log(scan_id, "fingerprint_detect", f"fingerprint_detect completed: identified {total_updated} fingerprints")
successful_tools = [name for name in enabled_tools.keys()
if name not in [f['tool'] for f in failed_tools]]

View File

@@ -115,7 +115,7 @@ def initiate_scan_flow(
# ==================== Task 2: 获取引擎配置 ====================
from apps.scan.models import Scan
scan = Scan.objects.get(id=scan_id)
engine_config = scan.merged_configuration
engine_config = scan.yaml_configuration
# 使用 engine_names 进行显示
display_engine_name = ', '.join(scan.engine_names) if scan.engine_names else engine_name

View File

@@ -28,7 +28,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command
from apps.scan.utils import config_parser, build_scan_command, user_log
logger = logging.getLogger(__name__)
@@ -265,6 +265,7 @@ def _run_scans_sequentially(
# 3. 执行扫描任务
logger.info("开始执行 %s 扫描(超时: %d秒)...", tool_name, config_timeout)
user_log(scan_id, "port_scan", f"Running {tool_name}: {command}")
try:
# 直接调用 task串行执行
@@ -286,26 +287,31 @@ def _run_scans_sequentially(
'result': result,
'timeout': config_timeout
}
processed_records += result.get('processed_records', 0)
tool_records = result.get('processed_records', 0)
processed_records += tool_records
logger.info(
"✓ 工具 %s 流式处理完成 - 记录数: %d",
tool_name, result.get('processed_records', 0)
tool_name, tool_records
)
user_log(scan_id, "port_scan", f"{tool_name} completed: found {tool_records} ports")
except subprocess.TimeoutExpired as exc:
# 超时异常单独处理
# 注意:流式处理任务超时时,已解析的数据已保存到数据库
reason = f"执行超时(配置: {config_timeout}秒)"
reason = f"timeout after {config_timeout}s"
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.warning(
"⚠️ 工具 %s 执行超时 - 超时配置: %d\n"
"注意:超时前已解析的端口数据已保存到数据库,但扫描未完全完成。",
tool_name, config_timeout
)
user_log(scan_id, "port_scan", f"{tool_name} failed: {reason}", "error")
except Exception as exc:
# 其他异常
failed_tools.append({'tool': tool_name, 'reason': str(exc)})
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("工具 %s 执行失败: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "port_scan", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
@@ -420,6 +426,8 @@ def port_scan_flow(
"="*60
)
user_log(scan_id, "port_scan", "Starting port scan")
# Step 0: 创建工作目录
from apps.scan.utils import setup_scan_directory
port_scan_dir = setup_scan_directory(scan_workspace_dir, 'port_scan')
@@ -428,7 +436,8 @@ def port_scan_flow(
targets_file, target_count, target_type = _export_scan_targets(target_id, port_scan_dir)
if target_count == 0:
logger.warning("目标下没有可扫描的地址,跳过端口扫描")
logger.warning("跳过端口扫描:没有目标可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "port_scan", "Skipped: no targets to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -467,7 +476,9 @@ def port_scan_flow(
target_name=target_name
)
logger.info("="*60 + "\n✓ 端口扫描完成\n" + "="*60)
# 记录 Flow 完成
logger.info("✓ 端口扫描完成 - 发现端口: %d", processed_records)
user_log(scan_id, "port_scan", f"port_scan completed: found {processed_records} ports")
# 动态生成已执行的任务列表
executed_tasks = ['export_scan_targets', 'parse_config']

View File

@@ -17,6 +17,7 @@ from apps.common.prefect_django_setup import setup_django_for_prefect
import logging
import os
import subprocess
import time
from pathlib import Path
from typing import Callable
from prefect import flow
@@ -26,7 +27,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import config_parser, build_scan_command
from apps.scan.utils import config_parser, build_scan_command, user_log
logger = logging.getLogger(__name__)
@@ -198,6 +199,7 @@ def _run_scans_sequentially(
"开始执行 %s 站点扫描 - URL数: %d, 最终超时: %ds",
tool_name, total_urls, timeout
)
user_log(scan_id, "site_scan", f"Running {tool_name}: {command}")
# 3. 执行扫描任务
try:
@@ -218,29 +220,35 @@ def _run_scans_sequentially(
'result': result,
'timeout': timeout
}
processed_records += result.get('processed_records', 0)
tool_records = result.get('processed_records', 0)
tool_created = result.get('created_websites', 0)
processed_records += tool_records
logger.info(
"✓ 工具 %s 流式处理完成 - 处理记录: %d, 创建站点: %d, 跳过: %d",
tool_name,
result.get('processed_records', 0),
result.get('created_websites', 0),
tool_records,
tool_created,
result.get('skipped_no_subdomain', 0) + result.get('skipped_failed', 0)
)
user_log(scan_id, "site_scan", f"{tool_name} completed: found {tool_created} websites")
except subprocess.TimeoutExpired as exc:
# 超时异常单独处理
reason = f"执行超时(配置: {timeout}秒)"
reason = f"timeout after {timeout}s"
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.warning(
"⚠️ 工具 %s 执行超时 - 超时配置: %d\n"
"注意:超时前已解析的站点数据已保存到数据库,但扫描未完全完成。",
tool_name, timeout
)
user_log(scan_id, "site_scan", f"{tool_name} failed: {reason}", "error")
except Exception as exc:
# 其他异常
failed_tools.append({'tool': tool_name, 'reason': str(exc)})
reason = str(exc)
failed_tools.append({'tool': tool_name, 'reason': reason})
logger.error("工具 %s 执行失败: %s", tool_name, exc, exc_info=True)
user_log(scan_id, "site_scan", f"{tool_name} failed: {reason}", "error")
if failed_tools:
logger.warning(
@@ -379,6 +387,8 @@ def site_scan_flow(
if not scan_workspace_dir:
raise ValueError("scan_workspace_dir 不能为空")
user_log(scan_id, "site_scan", "Starting site scan")
# Step 0: 创建工作目录
from apps.scan.utils import setup_scan_directory
site_scan_dir = setup_scan_directory(scan_workspace_dir, 'site_scan')
@@ -389,7 +399,8 @@ def site_scan_flow(
)
if total_urls == 0:
logger.warning("目标下没有可用的站点URL,跳过站点扫描")
logger.warning("跳过站点扫描:没有站点 URL 可扫描 - Scan ID: %s", scan_id)
user_log(scan_id, "site_scan", "Skipped: no site URLs to scan", "warning")
return {
'success': True,
'scan_id': scan_id,
@@ -432,8 +443,6 @@ def site_scan_flow(
target_name=target_name
)
logger.info("="*60 + "\n✓ 站点扫描完成\n" + "="*60)
# 动态生成已执行的任务列表
executed_tasks = ['export_site_urls', 'parse_config']
executed_tasks.extend([f'run_and_stream_save_websites ({tool})' for tool in tool_stats.keys()])
@@ -443,6 +452,10 @@ def site_scan_flow(
total_skipped_no_subdomain = sum(stats['result'].get('skipped_no_subdomain', 0) for stats in tool_stats.values())
total_skipped_failed = sum(stats['result'].get('skipped_failed', 0) for stats in tool_stats.values())
# 记录 Flow 完成
logger.info("✓ 站点扫描完成 - 创建站点: %d", total_created)
user_log(scan_id, "site_scan", f"site_scan completed: found {total_created} websites")
return {
'success': True,
'scan_id': scan_id,

View File

@@ -30,7 +30,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_wordlist_local
from apps.scan.utils import build_scan_command, ensure_wordlist_local, user_log
from apps.engine.services.wordlist_service import WordlistService
from apps.common.normalizer import normalize_domain
from apps.common.validators import validate_domain
@@ -77,7 +77,8 @@ def _validate_and_normalize_target(target_name: str) -> str:
def _run_scans_parallel(
enabled_tools: dict,
domain_name: str,
result_dir: Path
result_dir: Path,
scan_id: int
) -> tuple[list, list, list]:
"""
并行运行所有启用的子域名扫描工具
@@ -86,6 +87,7 @@ def _run_scans_parallel(
enabled_tools: 启用的工具配置字典 {'tool_name': {'timeout': 600, ...}}
domain_name: 目标域名
result_dir: 结果输出目录
scan_id: 扫描任务 ID(用于记录日志)
Returns:
tuple: (result_files, failed_tools, successful_tool_names)
@@ -137,6 +139,9 @@ def _run_scans_parallel(
f"提交任务 - 工具: {tool_name}, 超时: {timeout}s, 输出: {output_file}"
)
# 记录工具开始执行日志
user_log(scan_id, "subdomain_discovery", f"Running {tool_name}: {command}")
future = run_subdomain_discovery_task.submit(
tool=tool_name,
command=command,
@@ -164,16 +169,19 @@ def _run_scans_parallel(
if result:
result_files.append(result)
logger.info("✓ 扫描工具 %s 执行成功: %s", tool_name, result)
user_log(scan_id, "subdomain_discovery", f"{tool_name} completed")
else:
failure_msg = f"{tool_name}: 未生成结果文件"
failures.append(failure_msg)
failed_tools.append({'tool': tool_name, 'reason': '未生成结果文件'})
logger.warning("⚠️ 扫描工具 %s 未生成结果文件", tool_name)
user_log(scan_id, "subdomain_discovery", f"{tool_name} failed: no output file", "error")
except Exception as e:
failure_msg = f"{tool_name}: {str(e)}"
failures.append(failure_msg)
failed_tools.append({'tool': tool_name, 'reason': str(e)})
logger.warning("⚠️ 扫描工具 %s 执行失败: %s", tool_name, str(e))
user_log(scan_id, "subdomain_discovery", f"{tool_name} failed: {str(e)}", "error")
# 4. 检查是否有成功的工具
if not result_files:
@@ -203,7 +211,8 @@ def _run_single_tool(
tool_config: dict,
command_params: dict,
result_dir: Path,
scan_type: str = 'subdomain_discovery'
scan_type: str = 'subdomain_discovery',
scan_id: int = None
) -> str:
"""
运行单个扫描工具
@@ -214,6 +223,7 @@ def _run_single_tool(
command_params: 命令参数
result_dir: 结果目录
scan_type: 扫描类型
scan_id: 扫描 ID(用于记录用户日志)
Returns:
str: 输出文件路径,失败返回空字符串
@@ -242,7 +252,9 @@ def _run_single_tool(
if timeout == 'auto':
timeout = 3600
logger.info(f"执行 {tool_name}: timeout={timeout}s")
logger.info(f"执行 {tool_name}: {command}")
if scan_id:
user_log(scan_id, scan_type, f"Running {tool_name}: {command}")
try:
result = run_subdomain_discovery_task(
@@ -401,7 +413,6 @@ def subdomain_discovery_flow(
logger.warning("目标域名无效,跳过子域名发现扫描: %s", e)
return _empty_result(scan_id, target_name, scan_workspace_dir)
# 验证成功后打印日志
logger.info(
"="*60 + "\n" +
"开始子域名发现扫描\n" +
@@ -410,6 +421,7 @@ def subdomain_discovery_flow(
f" Workspace: {scan_workspace_dir}\n" +
"="*60
)
user_log(scan_id, "subdomain_discovery", f"Starting subdomain discovery for {domain_name}")
# 解析配置
passive_tools = scan_config.get('passive_tools', {})
@@ -429,23 +441,22 @@ def subdomain_discovery_flow(
successful_tool_names = []
# ==================== Stage 1: 被动收集(并行)====================
logger.info("=" * 40)
logger.info("Stage 1: 被动收集(并行)")
logger.info("=" * 40)
if enabled_passive_tools:
logger.info("=" * 40)
logger.info("Stage 1: 被动收集(并行)")
logger.info("=" * 40)
logger.info("启用工具: %s", ', '.join(enabled_passive_tools.keys()))
user_log(scan_id, "subdomain_discovery", f"Stage 1: passive collection ({', '.join(enabled_passive_tools.keys())})")
result_files, stage1_failed, stage1_success = _run_scans_parallel(
enabled_tools=enabled_passive_tools,
domain_name=domain_name,
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
all_result_files.extend(result_files)
failed_tools.extend(stage1_failed)
successful_tool_names.extend(stage1_success)
executed_tasks.extend([f'passive ({tool})' for tool in stage1_success])
else:
logger.warning("未启用任何被动收集工具")
# 合并 Stage 1 结果
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
@@ -456,7 +467,6 @@ def subdomain_discovery_flow(
else:
# 创建空文件
Path(current_result).touch()
logger.warning("Stage 1 无结果,创建空文件")
# ==================== Stage 2: 字典爆破(可选)====================
bruteforce_enabled = bruteforce_config.get('enabled', False)
@@ -464,6 +474,7 @@ def subdomain_discovery_flow(
logger.info("=" * 40)
logger.info("Stage 2: 字典爆破")
logger.info("=" * 40)
user_log(scan_id, "subdomain_discovery", "Stage 2: bruteforce")
bruteforce_tool_config = bruteforce_config.get('subdomain_bruteforce', {})
wordlist_name = bruteforce_tool_config.get('wordlist_name', 'dns_wordlist.txt')
@@ -496,22 +507,16 @@ def subdomain_discovery_flow(
**bruteforce_tool_config,
'timeout': timeout_value,
}
logger.info(
"subdomain_bruteforce 使用自动 timeout: %s 秒 (字典行数=%s, 3秒/行)",
timeout_value,
line_count_int,
)
brute_output = str(result_dir / f"subs_brute_{timestamp}.txt")
brute_result = _run_single_tool(
tool_name='subdomain_bruteforce',
tool_config=bruteforce_tool_config,
command_params={
'domain': domain_name,
'wordlist': local_wordlist_path,
'output_file': brute_output
},
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
if brute_result:
@@ -522,11 +527,16 @@ def subdomain_discovery_flow(
)
successful_tool_names.append('subdomain_bruteforce')
executed_tasks.append('bruteforce')
logger.info("✓ subdomain_bruteforce 执行完成")
user_log(scan_id, "subdomain_discovery", "subdomain_bruteforce completed")
else:
failed_tools.append({'tool': 'subdomain_bruteforce', 'reason': '执行失败'})
logger.warning("⚠️ subdomain_bruteforce 执行失败")
user_log(scan_id, "subdomain_discovery", "subdomain_bruteforce failed: execution failed", "error")
except Exception as exc:
logger.warning("字典准备失败,跳过字典爆破: %s", exc)
failed_tools.append({'tool': 'subdomain_bruteforce', 'reason': str(exc)})
logger.warning("字典准备失败,跳过字典爆破: %s", exc)
user_log(scan_id, "subdomain_discovery", f"subdomain_bruteforce failed: {str(exc)}", "error")
# ==================== Stage 3: 变异生成 + 验证(可选)====================
permutation_enabled = permutation_config.get('enabled', False)
@@ -534,6 +544,7 @@ def subdomain_discovery_flow(
logger.info("=" * 40)
logger.info("Stage 3: 变异生成 + 存活验证(流式管道)")
logger.info("=" * 40)
user_log(scan_id, "subdomain_discovery", "Stage 3: permutation + resolve")
permutation_tool_config = permutation_config.get('subdomain_permutation_resolve', {})
@@ -587,20 +598,19 @@ def subdomain_discovery_flow(
'tool': 'subdomain_permutation_resolve',
'reason': f"采样检测到泛解析 (膨胀率 {ratio:.1f}x)"
})
user_log(scan_id, "subdomain_discovery", f"subdomain_permutation_resolve skipped: wildcard detected (ratio {ratio:.1f}x)", "warning")
else:
# === Step 3.2: 采样通过,执行完整变异 ===
logger.info("采样检测通过,执行完整变异...")
permuted_output = str(result_dir / f"subs_permuted_{timestamp}.txt")
permuted_result = _run_single_tool(
tool_name='subdomain_permutation_resolve',
tool_config=permutation_tool_config,
command_params={
'input_file': current_result,
'output_file': permuted_output,
},
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
if permuted_result:
@@ -611,15 +621,21 @@ def subdomain_discovery_flow(
)
successful_tool_names.append('subdomain_permutation_resolve')
executed_tasks.append('permutation')
logger.info("✓ subdomain_permutation_resolve 执行完成")
user_log(scan_id, "subdomain_discovery", "subdomain_permutation_resolve completed")
else:
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': '执行失败'})
logger.warning("⚠️ subdomain_permutation_resolve 执行失败")
user_log(scan_id, "subdomain_discovery", "subdomain_permutation_resolve failed: execution failed", "error")
except subprocess.TimeoutExpired:
logger.warning(f"采样检测超时 ({SAMPLE_TIMEOUT}秒),跳过变异")
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': '采样检测超时'})
logger.warning(f"采样检测超时 ({SAMPLE_TIMEOUT}秒),跳过变异")
user_log(scan_id, "subdomain_discovery", "subdomain_permutation_resolve failed: sample detection timeout", "error")
except Exception as e:
logger.warning(f"采样检测失败: {e},跳过变异")
failed_tools.append({'tool': 'subdomain_permutation_resolve', 'reason': f'采样检测失败: {e}'})
logger.warning(f"采样检测失败: {e},跳过变异")
user_log(scan_id, "subdomain_discovery", f"subdomain_permutation_resolve failed: {str(e)}", "error")
# ==================== Stage 4: DNS 存活验证(可选)====================
# 无论是否启用 Stage 3,只要 resolve.enabled 为 true 就会执行,对当前所有候选子域做统一 DNS 验证
@@ -628,6 +644,7 @@ def subdomain_discovery_flow(
logger.info("=" * 40)
logger.info("Stage 4: DNS 存活验证")
logger.info("=" * 40)
user_log(scan_id, "subdomain_discovery", "Stage 4: DNS resolve")
resolve_tool_config = resolve_config.get('subdomain_resolve', {})
@@ -651,30 +668,27 @@ def subdomain_discovery_flow(
**resolve_tool_config,
'timeout': timeout_value,
}
logger.info(
"subdomain_resolve 使用自动 timeout: %s 秒 (候选子域数=%s, 3秒/域名)",
timeout_value,
line_count_int,
)
alive_output = str(result_dir / f"subs_alive_{timestamp}.txt")
alive_result = _run_single_tool(
tool_name='subdomain_resolve',
tool_config=resolve_tool_config,
command_params={
'input_file': current_result,
'output_file': alive_output,
},
result_dir=result_dir
result_dir=result_dir,
scan_id=scan_id
)
if alive_result:
current_result = alive_result
successful_tool_names.append('subdomain_resolve')
executed_tasks.append('resolve')
logger.info("✓ subdomain_resolve 执行完成")
user_log(scan_id, "subdomain_discovery", "subdomain_resolve completed")
else:
failed_tools.append({'tool': 'subdomain_resolve', 'reason': '执行失败'})
logger.warning("⚠️ subdomain_resolve 执行失败")
user_log(scan_id, "subdomain_discovery", "subdomain_resolve failed: execution failed", "error")
# ==================== Final: 保存到数据库 ====================
logger.info("=" * 40)
@@ -695,7 +709,9 @@ def subdomain_discovery_flow(
processed_domains = save_result.get('processed_records', 0)
executed_tasks.append('save_domains')
# 记录 Flow 完成
logger.info("="*60 + "\n✓ 子域名发现扫描完成\n" + "="*60)
user_log(scan_id, "subdomain_discovery", f"subdomain_discovery completed: found {processed_domains} subdomains")
return {
'success': True,
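
Taken together, the user_log calls added above leave a user-visible trail in ScanLog for each subdomain_discovery run. A rough illustration of the rows one run might produce, assuming Stage 1 and Stage 4 are enabled and one passive tool fails (tool names, commands and counts are hypothetical):

# Illustrative only - actual rows depend on the enabled stages and tools.
expected_rows = [
    ("info",  "[subdomain_discovery] Starting subdomain discovery for example.com"),
    ("info",  "[subdomain_discovery] Stage 1: passive collection (subfinder, amass)"),
    ("info",  "[subdomain_discovery] Running subfinder: subfinder -d example.com -o subs.txt"),
    ("info",  "[subdomain_discovery] subfinder completed"),
    ("error", "[subdomain_discovery] amass failed: no output file"),
    ("info",  "[subdomain_discovery] Stage 4: DNS resolve"),
    ("info",  "[subdomain_discovery] subdomain_resolve completed"),
    ("info",  "[subdomain_discovery] subdomain_discovery completed: found 137 subdomains"),
]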

View File

@@ -59,6 +59,8 @@ def domain_name_url_fetch_flow(
- IP 和 CIDR 类型会自动跳过(waymore 等工具不支持)
- 工具会自动收集 *.target_name 的所有历史 URL,无需遍历子域名
"""
from apps.scan.utils import user_log
try:
output_path = Path(output_dir)
output_path.mkdir(parents=True, exist_ok=True)
@@ -145,6 +147,9 @@ def domain_name_url_fetch_flow(
timeout,
)
# 记录工具开始执行日志
user_log(scan_id, "url_fetch", f"Running {tool_name}: {command}")
future = run_url_fetcher_task.submit(
tool_name=tool_name,
command=command,
@@ -163,22 +168,28 @@ def domain_name_url_fetch_flow(
if result and result.get("success"):
result_files.append(result["output_file"])
successful_tools.append(tool_name)
url_count = result.get("url_count", 0)
logger.info(
"✓ 工具 %s 执行成功 - 发现 URL: %d",
tool_name,
result.get("url_count", 0),
url_count,
)
user_log(scan_id, "url_fetch", f"{tool_name} completed: found {url_count} urls")
else:
reason = "未生成结果或无有效 URL"
failed_tools.append(
{
"tool": tool_name,
"reason": "未生成结果或无有效 URL",
"reason": reason,
}
)
logger.warning("⚠️ 工具 %s 未生成有效结果", tool_name)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
except Exception as e:
failed_tools.append({"tool": tool_name, "reason": str(e)})
reason = str(e)
failed_tools.append({"tool": tool_name, "reason": reason})
logger.warning("⚠️ 工具 %s 执行失败: %s", tool_name, e)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
logger.info(
"基于 domain_name 的 URL 获取完成 - 成功工具: %s, 失败工具: %s",

View File

@@ -25,6 +25,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import user_log
from .domain_name_url_fetch_flow import domain_name_url_fetch_flow
from .sites_url_fetch_flow import sites_url_fetch_flow
@@ -291,6 +292,8 @@ def url_fetch_flow(
"="*60
)
user_log(scan_id, "url_fetch", "Starting URL fetch")
# Step 1: 准备工作目录
logger.info("Step 1: 准备工作目录")
from apps.scan.utils import setup_scan_directory
@@ -403,7 +406,9 @@ def url_fetch_flow(
target_id=target_id
)
logger.info("="*60 + "\n✓ URL 获取扫描完成\n" + "="*60)
# 记录 Flow 完成
logger.info("✓ URL 获取完成 - 保存 endpoints: %d", saved_count)
user_log(scan_id, "url_fetch", f"url_fetch completed: found {saved_count} endpoints")
# 构建已执行的任务列表
executed_tasks = ['setup_directory', 'classify_tools']

View File

@@ -116,7 +116,8 @@ def sites_url_fetch_flow(
tools=enabled_tools,
input_file=sites_file,
input_type="sites_file",
output_dir=output_path
output_dir=output_path,
scan_id=scan_id
)
logger.info(

View File

@@ -152,7 +152,8 @@ def run_tools_parallel(
tools: dict,
input_file: str,
input_type: str,
output_dir: Path
output_dir: Path,
scan_id: int
) -> tuple[list, list, list]:
"""
并行执行工具列表
@@ -162,11 +163,13 @@ def run_tools_parallel(
input_file: 输入文件路径
input_type: 输入类型
output_dir: 输出目录
scan_id: 扫描任务 ID(用于记录日志)
Returns:
tuple: (result_files, failed_tools, successful_tool_names)
"""
from apps.scan.tasks.url_fetch import run_url_fetcher_task
from apps.scan.utils import user_log
futures: dict[str, object] = {}
failed_tools: list[dict] = []
@@ -192,6 +195,9 @@ def run_tools_parallel(
exec_params["timeout"],
)
# 记录工具开始执行日志
user_log(scan_id, "url_fetch", f"Running {tool_name}: {exec_params['command']}")
# 提交并行任务
future = run_url_fetcher_task.submit(
tool_name=tool_name,
@@ -208,22 +214,28 @@ def run_tools_parallel(
result = future.result()
if result and result['success']:
result_files.append(result['output_file'])
url_count = result['url_count']
logger.info(
"✓ 工具 %s 执行成功 - 发现 URL: %d",
tool_name, result['url_count']
tool_name, url_count
)
user_log(scan_id, "url_fetch", f"{tool_name} completed: found {url_count} urls")
else:
reason = '未生成结果或无有效URL'
failed_tools.append({
'tool': tool_name,
'reason': '未生成结果或无有效URL'
'reason': reason
})
logger.warning("⚠️ 工具 %s 未生成有效结果", tool_name)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
except Exception as e:
reason = str(e)
failed_tools.append({
'tool': tool_name,
'reason': str(e)
'reason': reason
})
logger.warning("⚠️ 工具 %s 执行失败: %s", tool_name, e)
user_log(scan_id, "url_fetch", f"{tool_name} failed: {reason}", "error")
# 计算成功的工具列表
failed_tool_names = [f['tool'] for f in failed_tools]

View File

@@ -12,7 +12,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_completed,
on_scan_flow_failed,
)
from apps.scan.utils import build_scan_command, ensure_nuclei_templates_local
from apps.scan.utils import build_scan_command, ensure_nuclei_templates_local, user_log
from apps.scan.tasks.vuln_scan import (
export_endpoints_task,
run_vuln_tool_task,
@@ -141,6 +141,7 @@ def endpoints_vuln_scan_flow(
# Dalfox XSS 使用流式任务,一边解析一边保存漏洞结果
if tool_name == "dalfox_xss":
logger.info("开始执行漏洞扫描工具 %s(流式保存漏洞结果,已提交任务)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_and_stream_save_dalfox_vulns_task.submit(
cmd=command,
tool_name=tool_name,
@@ -163,6 +164,7 @@ def endpoints_vuln_scan_flow(
elif tool_name == "nuclei":
# Nuclei 使用流式任务
logger.info("开始执行漏洞扫描工具 %s(流式保存漏洞结果,已提交任务)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_and_stream_save_nuclei_vulns_task.submit(
cmd=command,
tool_name=tool_name,
@@ -185,6 +187,7 @@ def endpoints_vuln_scan_flow(
else:
# 其他工具仍使用非流式执行逻辑
logger.info("开始执行漏洞扫描工具 %s(已提交任务)", tool_name)
user_log(scan_id, "vuln_scan", f"Running {tool_name}: {command}")
future = run_vuln_tool_task.submit(
tool_name=tool_name,
command=command,
@@ -203,24 +206,34 @@ def endpoints_vuln_scan_flow(
# 统一收集所有工具的执行结果
for tool_name, meta in tool_futures.items():
future = meta["future"]
result = future.result()
try:
result = future.result()
if meta["mode"] == "streaming":
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"processed_records": result.get("processed_records"),
"created_vulns": result.get("created_vulns"),
"command_log_file": meta["log_file"],
}
else:
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"duration": result.get("duration"),
"returncode": result.get("returncode"),
"command_log_file": result.get("command_log_file"),
}
if meta["mode"] == "streaming":
created_vulns = result.get("created_vulns", 0)
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"processed_records": result.get("processed_records"),
"created_vulns": created_vulns,
"command_log_file": meta["log_file"],
}
logger.info("✓ 工具 %s 执行完成 - 漏洞: %d", tool_name, created_vulns)
user_log(scan_id, "vuln_scan", f"{tool_name} completed: found {created_vulns} vulnerabilities")
else:
tool_results[tool_name] = {
"command": meta["command"],
"timeout": meta["timeout"],
"duration": result.get("duration"),
"returncode": result.get("returncode"),
"command_log_file": result.get("command_log_file"),
}
logger.info("✓ 工具 %s 执行完成 - returncode=%s", tool_name, result.get("returncode"))
user_log(scan_id, "vuln_scan", f"{tool_name} completed")
except Exception as e:
reason = str(e)
logger.error("工具 %s 执行失败: %s", tool_name, e, exc_info=True)
user_log(scan_id, "vuln_scan", f"{tool_name} failed: {reason}", "error")
return {
"success": True,

View File

@@ -11,6 +11,7 @@ from apps.scan.handlers.scan_flow_handlers import (
on_scan_flow_failed,
)
from apps.scan.configs.command_templates import get_command_template
from apps.scan.utils import user_log
from .endpoints_vuln_scan_flow import endpoints_vuln_scan_flow
@@ -72,6 +73,9 @@ def vuln_scan_flow(
if not enabled_tools:
raise ValueError("enabled_tools 不能为空")
logger.info("开始漏洞扫描 - Scan ID: %s, Target: %s", scan_id, target_name)
user_log(scan_id, "vuln_scan", "Starting vulnerability scan")
# Step 1: 分类工具
endpoints_tools, other_tools = _classify_vuln_tools(enabled_tools)
@@ -99,6 +103,14 @@ def vuln_scan_flow(
enabled_tools=endpoints_tools,
)
# 记录 Flow 完成
total_vulns = sum(
r.get("created_vulns", 0)
for r in endpoint_result.get("tool_results", {}).values()
)
logger.info("✓ 漏洞扫描完成 - 新增漏洞: %d", total_vulns)
user_log(scan_id, "vuln_scan", f"vuln_scan completed: found {total_vulns} vulnerabilities")
# 目前只有一个子 Flow,直接返回其结果
return endpoint_result
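
The total_vulns aggregation above only counts streaming tools: non-streaming entries in tool_results carry no created_vulns key, so .get("created_vulns", 0) contributes 0 for them. A minimal sketch of that arithmetic with hypothetical, abbreviated results:

# Hypothetical tool_results shaped like the ones built in endpoints_vuln_scan_flow
tool_results = {
    "nuclei":     {"processed_records": 80, "created_vulns": 12},   # streaming
    "dalfox_xss": {"processed_records": 40, "created_vulns": 3},    # streaming
    "some_tool":  {"duration": 95.2, "returncode": 0},              # non-streaming, no created_vulns
}
total_vulns = sum(r.get("created_vulns", 0) for r in tool_results.values())
print(total_vulns)  # 15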

View File

@@ -14,6 +14,7 @@ from prefect import Flow
from prefect.client.schemas import FlowRun, State
from apps.scan.utils.performance import FlowPerformanceTracker
from apps.scan.utils import user_log
logger = logging.getLogger(__name__)
@@ -136,6 +137,7 @@ def on_scan_flow_failed(flow: Flow, flow_run: FlowRun, state: State) -> None:
- 更新阶段进度为 failed
- 发送扫描失败通知
- 记录性能指标(含错误信息)
- 写入 ScanLog 供前端显示
Args:
flow: Prefect Flow 对象
@@ -152,6 +154,11 @@ def on_scan_flow_failed(flow: Flow, flow_run: FlowRun, state: State) -> None:
# 提取错误信息
error_message = str(state.message) if state.message else "未知错误"
# 写入 ScanLog 供前端显示
stage = _get_stage_from_flow_name(flow.name)
if scan_id and stage:
user_log(scan_id, stage, f"Failed: {error_message}", "error")
# 记录性能指标(失败情况)
tracker = _flow_trackers.pop(str(flow_run.id), None)
if tracker:

View File

@@ -57,7 +57,7 @@ class Migration(migrations.Migration):
('id', models.AutoField(primary_key=True, serialize=False)),
('engine_ids', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), default=list, help_text='引擎 ID 列表', size=None)),
('engine_names', models.JSONField(default=list, help_text='引擎名称列表,如 ["引擎A", "引擎B"]')),
('merged_configuration', models.TextField(default='', help_text='合并后的 YAML 配置')),
('yaml_configuration', models.TextField(default='', help_text='YAML 格式的扫描配置')),
('created_at', models.DateTimeField(auto_now_add=True, help_text='任务创建时间')),
('stopped_at', models.DateTimeField(blank=True, help_text='扫描结束时间', null=True)),
('status', models.CharField(choices=[('cancelled', '已取消'), ('completed', '已完成'), ('failed', '失败'), ('initiated', '初始化'), ('running', '运行中')], db_index=True, default='initiated', help_text='任务状态', max_length=20)),
@@ -97,7 +97,7 @@ class Migration(migrations.Migration):
('name', models.CharField(help_text='任务名称', max_length=200)),
('engine_ids', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), default=list, help_text='引擎 ID 列表', size=None)),
('engine_names', models.JSONField(default=list, help_text='引擎名称列表,如 ["引擎A", "引擎B"]')),
('merged_configuration', models.TextField(default='', help_text='合并后的 YAML 配置')),
('yaml_configuration', models.TextField(default='', help_text='YAML 格式的扫描配置')),
('cron_expression', models.CharField(default='0 2 * * *', help_text='Cron 表达式,格式:分 时 日 月 周', max_length=100)),
('is_enabled', models.BooleanField(db_index=True, default=True, help_text='是否启用')),
('run_count', models.IntegerField(default=0, help_text='已执行次数')),
@@ -116,4 +116,21 @@ class Migration(migrations.Migration):
'indexes': [models.Index(fields=['-created_at'], name='scheduled_s_created_9b9c2e_idx'), models.Index(fields=['is_enabled', '-created_at'], name='scheduled_s_is_enab_23d660_idx'), models.Index(fields=['name'], name='scheduled_s_name_bf332d_idx')],
},
),
migrations.CreateModel(
name='ScanLog',
fields=[
('id', models.BigAutoField(primary_key=True, serialize=False)),
('level', models.CharField(choices=[('info', 'Info'), ('warning', 'Warning'), ('error', 'Error')], default='info', help_text='日志级别', max_length=10)),
('content', models.TextField(help_text='日志内容')),
('created_at', models.DateTimeField(auto_now_add=True, db_index=True, help_text='创建时间')),
('scan', models.ForeignKey(db_index=True, help_text='关联的扫描任务', on_delete=django.db.models.deletion.CASCADE, related_name='logs', to='scan.scan')),
],
options={
'verbose_name': '扫描日志',
'verbose_name_plural': '扫描日志',
'db_table': 'scan_log',
'ordering': ['created_at'],
'indexes': [models.Index(fields=['scan', 'created_at'], name='scan_log_scan_id_e8c8f5_idx')],
},
),
]

View File

@@ -30,9 +30,9 @@ class Scan(models.Model):
default=list,
help_text='引擎名称列表,如 ["引擎A", "引擎B"]'
)
merged_configuration = models.TextField(
yaml_configuration = models.TextField(
default='',
help_text='合并后的 YAML 配置'
help_text='YAML 格式的扫描配置'
)
created_at = models.DateTimeField(auto_now_add=True, help_text='任务创建时间')
@@ -106,6 +106,55 @@ class Scan(models.Model):
return f"Scan #{self.id} - {self.target.name}"
class ScanLog(models.Model):
"""扫描日志模型
存储扫描过程中的关键处理日志,用于前端实时查看扫描进度。
日志类型:
- 阶段开始/完成/失败
- 处理进度(如 "Progress: 50/120")
- 发现结果统计(如 "Found 120 subdomains")
- 错误信息
日志格式:[stage_name] message
"""
class Level(models.TextChoices):
INFO = 'info', 'Info'
WARNING = 'warning', 'Warning'
ERROR = 'error', 'Error'
id = models.BigAutoField(primary_key=True)
scan = models.ForeignKey(
'Scan',
on_delete=models.CASCADE,
related_name='logs',
db_index=True,
help_text='关联的扫描任务'
)
level = models.CharField(
max_length=10,
choices=Level.choices,
default=Level.INFO,
help_text='日志级别'
)
content = models.TextField(help_text='日志内容')
created_at = models.DateTimeField(auto_now_add=True, db_index=True, help_text='创建时间')
class Meta:
db_table = 'scan_log'
verbose_name = '扫描日志'
verbose_name_plural = '扫描日志'
ordering = ['created_at']
indexes = [
models.Index(fields=['scan', 'created_at']),
]
def __str__(self):
return f"[{self.level}] {self.content[:50]}"
class ScheduledScan(models.Model):
"""
定时扫描任务模型
@@ -136,9 +185,9 @@ class ScheduledScan(models.Model):
default=list,
help_text='引擎名称列表,如 ["引擎A", "引擎B"]'
)
merged_configuration = models.TextField(
yaml_configuration = models.TextField(
default='',
help_text='合并后的 YAML 配置'
help_text='YAML 格式的扫描配置'
)
# 关联的组织(组织扫描模式:执行时动态获取组织下所有目标)

View File

@@ -104,7 +104,7 @@ class DjangoScanRepository:
target: Target,
engine_ids: List[int],
engine_names: List[str],
merged_configuration: str,
yaml_configuration: str,
results_dir: str,
status: ScanStatus = ScanStatus.INITIATED
) -> Scan:
@@ -115,7 +115,7 @@ class DjangoScanRepository:
target: 扫描目标
engine_ids: 引擎 ID 列表
engine_names: 引擎名称列表
merged_configuration: 合并后的 YAML 配置
yaml_configuration: YAML 格式的扫描配置
results_dir: 结果目录
status: 初始状态
@@ -126,7 +126,7 @@ class DjangoScanRepository:
target=target,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration,
yaml_configuration=yaml_configuration,
results_dir=results_dir,
status=status,
container_ids=[]

View File

@@ -31,7 +31,7 @@ class ScheduledScanDTO:
name: str = ''
engine_ids: List[int] = None # 多引擎支持
engine_names: List[str] = None # 引擎名称列表
merged_configuration: str = '' # 合并后的配置
yaml_configuration: str = '' # YAML 格式的扫描配置
organization_id: Optional[int] = None # 组织扫描模式
target_id: Optional[int] = None # 目标扫描模式
cron_expression: Optional[str] = None
@@ -114,7 +114,7 @@ class DjangoScheduledScanRepository:
name=dto.name,
engine_ids=dto.engine_ids,
engine_names=dto.engine_names,
merged_configuration=dto.merged_configuration,
yaml_configuration=dto.yaml_configuration,
organization_id=dto.organization_id, # 组织扫描模式
target_id=dto.target_id if not dto.organization_id else None, # 目标扫描模式
cron_expression=dto.cron_expression,
@@ -147,8 +147,8 @@ class DjangoScheduledScanRepository:
scheduled_scan.engine_ids = dto.engine_ids
if dto.engine_names is not None:
scheduled_scan.engine_names = dto.engine_names
if dto.merged_configuration is not None:
scheduled_scan.merged_configuration = dto.merged_configuration
if dto.yaml_configuration is not None:
scheduled_scan.yaml_configuration = dto.yaml_configuration
if dto.cron_expression is not None:
scheduled_scan.cron_expression = dto.cron_expression
if dto.is_enabled is not None:

View File

@@ -1,7 +1,79 @@
from rest_framework import serializers
from django.db.models import Count
import yaml
from .models import Scan, ScheduledScan
from .models import Scan, ScheduledScan, ScanLog
# ==================== 扫描日志序列化器 ====================
class ScanLogSerializer(serializers.ModelSerializer):
"""扫描日志序列化器"""
class Meta:
model = ScanLog
fields = ['id', 'level', 'content', 'created_at']
# ==================== 通用验证 Mixin ====================
class DuplicateKeyLoader(yaml.SafeLoader):
"""自定义 YAML Loader检测重复 key"""
pass
def _check_duplicate_keys(loader, node, deep=False):
"""检测 YAML mapping 中的重复 key"""
mapping = {}
for key_node, value_node in node.value:
key = loader.construct_object(key_node, deep=deep)
if key in mapping:
raise yaml.constructor.ConstructorError(
"while constructing a mapping", node.start_mark,
f"发现重复的配置项 '{key}',后面的配置会覆盖前面的配置,请删除重复项", key_node.start_mark
)
mapping[key] = loader.construct_object(value_node, deep=deep)
return mapping
DuplicateKeyLoader.add_constructor(
yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
_check_duplicate_keys
)
class ScanConfigValidationMixin:
"""扫描配置验证 Mixin提供通用的验证方法"""
def validate_configuration(self, value):
"""验证 YAML 配置格式,包括检测重复 key"""
import yaml
if not value or not value.strip():
raise serializers.ValidationError("configuration 不能为空")
try:
# 使用自定义 Loader 检测重复 key
yaml.load(value, Loader=DuplicateKeyLoader)
except yaml.YAMLError as e:
raise serializers.ValidationError(f"无效的 YAML 格式: {str(e)}")
return value
def validate_engine_ids(self, value):
"""验证引擎 ID 列表"""
if not value:
raise serializers.ValidationError("engine_ids 不能为空,请至少选择一个扫描引擎")
return value
def validate_engine_names(self, value):
"""验证引擎名称列表"""
if not value:
raise serializers.ValidationError("engine_names 不能为空")
return value
# ==================== 扫描任务序列化器 ====================
class ScanSerializer(serializers.ModelSerializer):
@@ -82,12 +154,12 @@ class ScanHistorySerializer(serializers.ModelSerializer):
return summary
class QuickScanSerializer(serializers.Serializer):
class QuickScanSerializer(ScanConfigValidationMixin, serializers.Serializer):
"""
快速扫描序列化器
功能:
- 接收目标列表和引擎配置
- 接收目标列表和 YAML 配置
- 自动创建/获取目标
- 立即发起扫描
"""
@@ -101,11 +173,24 @@ class QuickScanSerializer(serializers.Serializer):
help_text='目标列表,每个目标包含 name 字段'
)
# 扫描引擎 ID 列表
# YAML 配置(必填)
configuration = serializers.CharField(
required=True,
help_text='YAML 格式的扫描配置(必填)'
)
# 扫描引擎 ID 列表(必填,用于记录和显示)
engine_ids = serializers.ListField(
child=serializers.IntegerField(),
required=True,
help_text='使用的扫描引擎 ID 列表 (必填)'
help_text='使用的扫描引擎 ID 列表(必填)'
)
# 引擎名称列表(必填,用于记录和显示)
engine_names = serializers.ListField(
child=serializers.CharField(),
required=True,
help_text='引擎名称列表(必填)'
)
def validate_targets(self, value):
@@ -127,12 +212,6 @@ class QuickScanSerializer(serializers.Serializer):
raise serializers.ValidationError(f"{idx + 1} 个目标的 name 不能为空")
return value
def validate_engine_ids(self, value):
"""验证引擎 ID 列表"""
if not value:
raise serializers.ValidationError("engine_ids 不能为空")
return value
# ==================== 定时扫描序列化器 ====================
@@ -171,7 +250,7 @@ class ScheduledScanSerializer(serializers.ModelSerializer):
return 'organization' if obj.organization_id else 'target'
class CreateScheduledScanSerializer(serializers.Serializer):
class CreateScheduledScanSerializer(ScanConfigValidationMixin, serializers.Serializer):
"""创建定时扫描任务序列化器
扫描模式(二选一):
@@ -180,9 +259,25 @@ class CreateScheduledScanSerializer(serializers.Serializer):
"""
name = serializers.CharField(max_length=200, help_text='任务名称')
# YAML 配置(必填)
configuration = serializers.CharField(
required=True,
help_text='YAML 格式的扫描配置(必填)'
)
# 扫描引擎 ID 列表(必填,用于记录和显示)
engine_ids = serializers.ListField(
child=serializers.IntegerField(),
help_text='扫描引擎 ID 列表'
required=True,
help_text='扫描引擎 ID 列表(必填)'
)
# 引擎名称列表(必填,用于记录和显示)
engine_names = serializers.ListField(
child=serializers.CharField(),
required=True,
help_text='引擎名称列表(必填)'
)
# 组织扫描模式
@@ -206,11 +301,61 @@ class CreateScheduledScanSerializer(serializers.Serializer):
)
is_enabled = serializers.BooleanField(default=True, help_text='是否立即启用')
def validate_engine_ids(self, value):
"""验证引擎 ID 列表"""
if not value:
raise serializers.ValidationError("engine_ids 不能为空")
return value
def validate(self, data):
"""验证 organization_id 和 target_id 互斥"""
organization_id = data.get('organization_id')
target_id = data.get('target_id')
if not organization_id and not target_id:
raise serializers.ValidationError('必须提供 organization_id 或 target_id 其中之一')
if organization_id and target_id:
raise serializers.ValidationError('organization_id 和 target_id 只能提供其中之一')
return data
class InitiateScanSerializer(ScanConfigValidationMixin, serializers.Serializer):
"""发起扫描任务序列化器
扫描模式(二选一):
- 组织扫描:提供 organization_id,扫描组织下所有目标
- 目标扫描:提供 target_id,扫描单个目标
"""
# YAML 配置(必填)
configuration = serializers.CharField(
required=True,
help_text='YAML 格式的扫描配置(必填)'
)
# 扫描引擎 ID 列表(必填)
engine_ids = serializers.ListField(
child=serializers.IntegerField(),
required=True,
help_text='扫描引擎 ID 列表(必填)'
)
# 引擎名称列表(必填)
engine_names = serializers.ListField(
child=serializers.CharField(),
required=True,
help_text='引擎名称列表(必填)'
)
# 组织扫描模式
organization_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='组织 ID(组织扫描模式)'
)
# 目标扫描模式
target_id = serializers.IntegerField(
required=False,
allow_null=True,
help_text='目标 ID(目标扫描模式)'
)
def validate(self, data):
"""验证 organization_id 和 target_id 互斥"""

View File

@@ -282,7 +282,7 @@ class ScanCreationService:
targets: List[Target],
engine_ids: List[int],
engine_names: List[str],
merged_configuration: str,
yaml_configuration: str,
scheduled_scan_name: str | None = None
) -> List[Scan]:
"""
@@ -292,7 +292,7 @@ class ScanCreationService:
targets: 目标列表
engine_ids: 引擎 ID 列表
engine_names: 引擎名称列表
merged_configuration: 合并后的 YAML 配置
yaml_configuration: YAML 格式的扫描配置
scheduled_scan_name: 定时扫描任务名称(可选,用于通知显示)
Returns:
@@ -312,7 +312,7 @@ class ScanCreationService:
target=target,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration,
yaml_configuration=yaml_configuration,
results_dir=scan_workspace_dir,
status=ScanStatus.INITIATED,
container_ids=[],

View File

@@ -117,12 +117,12 @@ class ScanService:
targets: List[Target],
engine_ids: List[int],
engine_names: List[str],
merged_configuration: str,
yaml_configuration: str,
scheduled_scan_name: str | None = None
) -> List[Scan]:
"""批量创建扫描任务(委托给 ScanCreationService"""
return self.creation_service.create_scans(
targets, engine_ids, engine_names, merged_configuration, scheduled_scan_name
targets, engine_ids, engine_names, yaml_configuration, scheduled_scan_name
)
# ==================== 状态管理方法(委托给 ScanStateService) ====================

View File

@@ -54,7 +54,7 @@ class ScheduledScanService:
def create(self, dto: ScheduledScanDTO) -> ScheduledScan:
"""
创建定时扫描任务
创建定时扫描任务(使用引擎 ID 合并配置)
流程:
1. 验证参数
@@ -88,7 +88,7 @@ class ScheduledScanService:
# 设置 DTO 的合并配置和引擎名称
dto.engine_names = engine_names
dto.merged_configuration = merged_configuration
dto.yaml_configuration = merged_configuration
# 3. 创建数据库记录
scheduled_scan = self.repo.create(dto)
@@ -107,12 +107,49 @@ class ScheduledScanService:
return scheduled_scan
def _validate_create_dto(self, dto: ScheduledScanDTO) -> None:
"""验证创建 DTO"""
from apps.targets.repositories import DjangoOrganizationRepository
def create_with_configuration(self, dto: ScheduledScanDTO) -> ScheduledScan:
"""
创建定时扫描任务(直接使用前端传递的配置)
if not dto.name:
raise ValidationError('任务名称不能为空')
流程:
1. 验证参数
2. 直接使用 dto.yaml_configuration
3. 创建数据库记录
4. 计算并设置 next_run_time
Args:
dto: 定时扫描 DTO(必须包含 yaml_configuration)
Returns:
创建的 ScheduledScan 对象
Raises:
ValidationError: 参数验证失败
"""
# 1. 验证参数
self._validate_create_dto_with_configuration(dto)
# 2. 创建数据库记录(直接使用 dto 中的配置)
scheduled_scan = self.repo.create(dto)
# 3. 如果有 cron 表达式且已启用,计算下次执行时间
if scheduled_scan.cron_expression and scheduled_scan.is_enabled:
next_run_time = self._calculate_next_run_time(scheduled_scan)
if next_run_time:
self.repo.update_next_run_time(scheduled_scan.id, next_run_time)
scheduled_scan.next_run_time = next_run_time
logger.info(
"创建定时扫描任务 - ID: %s, 名称: %s, 下次执行: %s",
scheduled_scan.id, scheduled_scan.name, scheduled_scan.next_run_time
)
return scheduled_scan
def _validate_create_dto(self, dto: ScheduledScanDTO) -> None:
"""验证创建 DTO使用引擎 ID"""
# 基础验证
self._validate_base_dto(dto)
if not dto.engine_ids:
raise ValidationError('必须选择扫描引擎')
@@ -121,6 +158,21 @@ class ScheduledScanService:
for engine_id in dto.engine_ids:
if not self.engine_repo.get_by_id(engine_id):
raise ValidationError(f'扫描引擎 ID {engine_id} 不存在')
def _validate_create_dto_with_configuration(self, dto: ScheduledScanDTO) -> None:
"""验证创建 DTO使用前端传递的配置"""
# 基础验证
self._validate_base_dto(dto)
if not dto.yaml_configuration:
raise ValidationError('配置不能为空')
def _validate_base_dto(self, dto: ScheduledScanDTO) -> None:
"""验证 DTO 的基础字段(公共逻辑)"""
from apps.targets.repositories import DjangoOrganizationRepository
if not dto.name:
raise ValidationError('任务名称不能为空')
# 验证扫描模式organization_id 和 target_id 互斥)
if not dto.organization_id and not dto.target_id:
@@ -178,7 +230,7 @@ class ScheduledScanService:
merged_configuration = merge_engine_configs(engines)
dto.engine_names = engine_names
dto.merged_configuration = merged_configuration
dto.yaml_configuration = merged_configuration
# 更新数据库记录
scheduled_scan = self.repo.update(scheduled_scan_id, dto)
@@ -329,7 +381,7 @@ class ScheduledScanService:
立即触发扫描(支持组织扫描和目标扫描两种模式)
复用 ScanService 的逻辑,与 API 调用保持一致。
使用存储的 merged_configuration 而不是重新合并。
使用存储的 yaml_configuration 而不是重新合并。
"""
from apps.scan.services.scan_service import ScanService
@@ -347,7 +399,7 @@ class ScheduledScanService:
targets=targets,
engine_ids=scheduled_scan.engine_ids,
engine_names=scheduled_scan.engine_names,
merged_configuration=scheduled_scan.merged_configuration,
yaml_configuration=scheduled_scan.yaml_configuration,
scheduled_scan_name=scheduled_scan.name
)
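
create_with_configuration skips the engine-merge step entirely and trusts the YAML supplied by the caller. A hedged usage sketch mirroring what ScheduledScanViewSet.create now does; the import paths and field values are assumptions, and it is meant for a Django shell:

from apps.scan.repositories import ScheduledScanDTO                       # path assumed
from apps.scan.services.scheduled_scan_service import ScheduledScanService  # path assumed

dto = ScheduledScanDTO(
    name="nightly example.com",
    engine_ids=[1],
    engine_names=["引擎A"],
    yaml_configuration="subdomain_discovery:\n  enabled: true\n",
    target_id=42,                 # or organization_id, but not both
    cron_expression="0 2 * * *",
    is_enabled=True,
)
scheduled_scan = ScheduledScanService().create_with_configuration(dto)
print(scheduled_scan.next_run_time)  # computed from the cron expression when enabled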

View File

@@ -1,6 +1,6 @@
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import ScanViewSet, ScheduledScanViewSet
from .views import ScanViewSet, ScheduledScanViewSet, ScanLogListView
from .notifications.views import notification_callback
from apps.asset.views import (
SubdomainSnapshotViewSet, WebsiteSnapshotViewSet, DirectorySnapshotViewSet,
@@ -31,6 +31,8 @@ urlpatterns = [
path('', include(router.urls)),
# Worker 回调 API
path('callbacks/notification/', notification_callback, name='notification-callback'),
# 扫描日志 API
path('scans/<int:scan_id>/logs/', ScanLogListView.as_view(), name='scan-logs-list'),
# 嵌套路由:/api/scans/{scan_pk}/xxx/
path('scans/<int:scan_pk>/subdomains/', scan_subdomains_list, name='scan-subdomains-list'),
path('scans/<int:scan_pk>/subdomains/export/', scan_subdomains_export, name='scan-subdomains-export'),

View File

@@ -11,6 +11,7 @@ from .wordlist_helpers import ensure_wordlist_local
from .nuclei_helpers import ensure_nuclei_templates_local
from .performance import FlowPerformanceTracker, CommandPerformanceTracker
from .workspace_utils import setup_scan_workspace, setup_scan_directory
from .user_logger import user_log
from . import config_parser
__all__ = [
@@ -31,6 +32,8 @@ __all__ = [
# 性能监控
'FlowPerformanceTracker', # Flow 性能追踪器(含系统资源采样)
'CommandPerformanceTracker', # 命令性能追踪器
# 扫描日志
'user_log', # 用户可见扫描日志记录
# 配置解析
'config_parser',
]

View File

@@ -0,0 +1,56 @@
"""
扫描日志记录器
提供统一的日志记录接口,用于在 Flow 中记录用户可见的扫描进度日志。
特性:
- 简单的函数式 API
- 只写入数据库(ScanLog 表),不写 Python logging
- 错误容忍(数据库失败不影响扫描执行)
职责分离:
- user_log: 用户可见日志(写数据库,前端展示)
- logger: 开发者日志(写日志文件/控制台,调试用)
使用示例:
from apps.scan.utils import user_log
# 用户日志(写数据库)
user_log(scan_id, "port_scan", "Starting port scan")
user_log(scan_id, "port_scan", "naabu completed: found 120 ports")
# 开发者日志(写日志文件)
logger.info("✓ 工具 %s 执行完成 - 记录数: %d", tool_name, count)
"""
import logging
from django.db import DatabaseError
logger = logging.getLogger(__name__)
def user_log(scan_id: int, stage: str, message: str, level: str = "info"):
"""
记录用户可见的扫描日志(只写数据库)
Args:
scan_id: 扫描任务 ID
stage: 阶段名称,如 "port_scan", "site_scan"
message: 日志消息
level: 日志级别,默认 "info",可选 "warning", "error"
数据库 content 格式: "[{stage}] {message}"
"""
formatted = f"[{stage}] {message}"
try:
from apps.scan.models import ScanLog
ScanLog.objects.create(
scan_id=scan_id,
level=level,
content=formatted
)
except DatabaseError as e:
logger.error("ScanLog write failed - scan_id=%s, error=%s", scan_id, e)
except Exception as e:
logger.error("ScanLog write unexpected error - scan_id=%s, error=%s", scan_id, e)

View File

@@ -2,8 +2,10 @@
from .scan_views import ScanViewSet
from .scheduled_scan_views import ScheduledScanViewSet
from .scan_log_views import ScanLogListView
__all__ = [
'ScanViewSet',
'ScheduledScanViewSet',
'ScanLogListView',
]

View File

@@ -0,0 +1,56 @@
"""
扫描日志 API
提供扫描日志查询接口,支持游标分页用于增量轮询。
"""
from rest_framework.views import APIView
from rest_framework.response import Response
from apps.scan.models import ScanLog
from apps.scan.serializers import ScanLogSerializer
class ScanLogListView(APIView):
"""
GET /scans/{scan_id}/logs/
游标分页 API,用于增量查询日志
查询参数:
- afterId: 只返回此 ID 之后的日志(用于增量轮询,避免时间戳重复导致的重复日志)
- limit: 返回数量限制(默认 200,最大 1000)
返回:
- results: 日志列表
- hasMore: 是否还有更多日志
"""
def get(self, request, scan_id: int):
# 参数解析
after_id = request.query_params.get('afterId')
try:
limit = min(int(request.query_params.get('limit', 200)), 1000)
except (ValueError, TypeError):
limit = 200
# 查询日志(按 ID 排序,ID 是自增的,保证顺序一致)
queryset = ScanLog.objects.filter(scan_id=scan_id).order_by('id')
# 游标过滤(使用 ID 而非时间戳,避免同一时间戳多条日志导致重复)
if after_id:
try:
queryset = queryset.filter(id__gt=int(after_id))
except (ValueError, TypeError):
pass
# 限制返回数量(多取一条用于判断 hasMore)
logs = list(queryset[:limit + 1])
has_more = len(logs) > limit
if has_more:
logs = logs[:limit]
return Response({
'results': ScanLogSerializer(logs, many=True).data,
'hasMore': has_more,
})
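
A hedged polling client for this endpoint, advancing the afterId cursor by the last id seen. The /api prefix, host and authentication are deployment assumptions not shown in this diff:

import time
import requests

BASE_URL = "http://localhost:8000/api"  # assumed prefix

def follow_scan_logs(scan_id: int, poll_interval: float = 2.0) -> None:
    last_id = 0
    while True:
        resp = requests.get(
            f"{BASE_URL}/scans/{scan_id}/logs/",
            params={"afterId": last_id, "limit": 200},
            timeout=10,
        )
        resp.raise_for_status()
        payload = resp.json()
        for entry in payload["results"]:
            print(f"{entry['created_at']} [{entry['level']}] {entry['content']}")
            last_id = entry["id"]
        if not payload["hasMore"]:
            time.sleep(poll_interval)  # caught up; wait before polling again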

View File

@@ -16,7 +16,7 @@ logger = logging.getLogger(__name__)
from ..models import Scan, ScheduledScan
from ..serializers import (
ScanSerializer, ScanHistorySerializer, QuickScanSerializer,
ScheduledScanSerializer, CreateScheduledScanSerializer,
InitiateScanSerializer, ScheduledScanSerializer, CreateScheduledScanSerializer,
UpdateScheduledScanSerializer, ToggleScheduledScanSerializer
)
from ..services.scan_service import ScanService
@@ -111,7 +111,7 @@ class ScanViewSet(viewsets.ModelViewSet):
快速扫描接口
功能:
1. 接收目标列表和引擎配置
1. 接收目标列表和 YAML 配置
2. 自动解析输入(支持 URL、域名、IP、CIDR)
3. 批量创建 Target、Website、Endpoint 资产
4. 立即发起批量扫描
@@ -119,7 +119,9 @@ class ScanViewSet(viewsets.ModelViewSet):
请求参数:
{
"targets": [{"name": "example.com"}, {"name": "https://example.com/api"}],
"engine_ids": [1, 2]
"configuration": "subdomain_discovery:\n enabled: true\n ...",
"engine_ids": [1, 2], // 可选,用于记录
"engine_names": ["引擎A", "引擎B"] // 可选,用于记录
}
支持的输入格式:
@@ -134,7 +136,9 @@ class ScanViewSet(viewsets.ModelViewSet):
serializer.is_valid(raise_exception=True)
targets_data = serializer.validated_data['targets']
engine_ids = serializer.validated_data.get('engine_ids')
configuration = serializer.validated_data['configuration']
engine_ids = serializer.validated_data.get('engine_ids', [])
engine_names = serializer.validated_data.get('engine_names', [])
try:
# 提取输入字符串列表
@@ -154,19 +158,13 @@ class ScanViewSet(viewsets.ModelViewSet):
status_code=status.HTTP_400_BAD_REQUEST
)
# 2. 准备多引擎扫描
# 2. 直接使用前端传递的配置创建扫描
scan_service = ScanService()
_, merged_configuration, engine_names, engine_ids = scan_service.prepare_initiate_scan_multi_engine(
target_id=targets[0].id, # 使用第一个目标来验证引擎
engine_ids=engine_ids
)
# 3. 批量发起扫描
created_scans = scan_service.create_scans(
targets=targets,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration
yaml_configuration=configuration
)
# 检查是否成功创建扫描任务
@@ -195,17 +193,6 @@ class ScanViewSet(viewsets.ModelViewSet):
},
status_code=status.HTTP_201_CREATED
)
except ConfigConflictError as e:
return error_response(
code='CONFIG_CONFLICT',
message=str(e),
details=[
{'key': k, 'engines': [e1, e2]}
for k, e1, e2 in e.conflicts
],
status_code=status.HTTP_400_BAD_REQUEST
)
except ValidationError as e:
return error_response(
@@ -228,48 +215,53 @@ class ScanViewSet(viewsets.ModelViewSet):
请求参数:
- organization_id: 组织ID (int, 可选)
- target_id: 目标ID (int, 可选)
- configuration: YAML 配置字符串 (str, 必填)
- engine_ids: 扫描引擎ID列表 (list[int], 必填)
- engine_names: 引擎名称列表 (list[str], 必填)
注意: organization_id 和 target_id 二选一
返回:
- 扫描任务详情(单个或多个)
"""
# 获取请求数据
organization_id = request.data.get('organization_id')
target_id = request.data.get('target_id')
engine_ids = request.data.get('engine_ids')
# 使用 serializer 验证请求数据
serializer = InitiateScanSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
# 验证 engine_ids
if not engine_ids:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='缺少必填参数: engine_ids',
status_code=status.HTTP_400_BAD_REQUEST
)
if not isinstance(engine_ids, list):
return error_response(
code=ErrorCodes.VALIDATION_ERROR,
message='engine_ids 必须是数组',
status_code=status.HTTP_400_BAD_REQUEST
)
# 获取验证后的数据
organization_id = serializer.validated_data.get('organization_id')
target_id = serializer.validated_data.get('target_id')
configuration = serializer.validated_data['configuration']
engine_ids = serializer.validated_data['engine_ids']
engine_names = serializer.validated_data['engine_names']
try:
# 步骤1:准备多引擎扫描所需的数据
# 获取目标列表
scan_service = ScanService()
targets, merged_configuration, engine_names, engine_ids = scan_service.prepare_initiate_scan_multi_engine(
organization_id=organization_id,
target_id=target_id,
engine_ids=engine_ids
)
# 步骤2:批量创建扫描记录并分发扫描任务
if organization_id:
from apps.targets.repositories import DjangoOrganizationRepository
org_repo = DjangoOrganizationRepository()
organization = org_repo.get_by_id(organization_id)
if not organization:
raise ObjectDoesNotExist(f'Organization ID {organization_id} 不存在')
targets = org_repo.get_targets(organization_id)
if not targets:
raise ValidationError(f'组织 ID {organization_id} 下没有目标')
else:
from apps.targets.repositories import DjangoTargetRepository
target_repo = DjangoTargetRepository()
target = target_repo.get_by_id(target_id)
if not target:
raise ObjectDoesNotExist(f'Target ID {target_id} 不存在')
targets = [target]
# 直接使用前端传递的配置创建扫描
created_scans = scan_service.create_scans(
targets=targets,
engine_ids=engine_ids,
engine_names=engine_names,
merged_configuration=merged_configuration
yaml_configuration=configuration
)
# 检查是否成功创建扫描任务
@@ -290,17 +282,6 @@ class ScanViewSet(viewsets.ModelViewSet):
},
status_code=status.HTTP_201_CREATED
)
except ConfigConflictError as e:
return error_response(
code='CONFIG_CONFLICT',
message=str(e),
details=[
{'key': k, 'engines': [e1, e2]}
for k, e1, e2 in e.conflicts
],
status_code=status.HTTP_400_BAD_REQUEST
)
except ObjectDoesNotExist as e:
# 资源不存在错误(由 service 层抛出)
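
With the serializer now owning validation, the initiate endpoint's contract is easiest to read off a payload. A minimal sketch of a body InitiateScanSerializer accepts; values are illustrative, the module path is assumed, and it should be run in a Django shell:

from apps.scan.serializers import InitiateScanSerializer  # path assumed from this diff

payload = {
    "target_id": 42,              # or organization_id, but not both
    "configuration": "subdomain_discovery:\n  enabled: true\n",
    "engine_ids": [1],
    "engine_names": ["引擎A"],
}
serializer = InitiateScanSerializer(data=payload)
serializer.is_valid(raise_exception=True)   # duplicate YAML keys or empty lists would raise here
print(serializer.validated_data["configuration"])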

View File

@@ -68,30 +68,22 @@ class ScheduledScanViewSet(viewsets.ModelViewSet):
data = serializer.validated_data
dto = ScheduledScanDTO(
name=data['name'],
engine_ids=data['engine_ids'],
engine_ids=data.get('engine_ids', []),
engine_names=data.get('engine_names', []),
yaml_configuration=data['configuration'],
organization_id=data.get('organization_id'),
target_id=data.get('target_id'),
cron_expression=data.get('cron_expression', '0 2 * * *'),
is_enabled=data.get('is_enabled', True),
)
scheduled_scan = self.service.create(dto)
scheduled_scan = self.service.create_with_configuration(dto)
response_serializer = ScheduledScanSerializer(scheduled_scan)
return success_response(
data=response_serializer.data,
status_code=status.HTTP_201_CREATED
)
except ConfigConflictError as e:
return error_response(
code='CONFIG_CONFLICT',
message=str(e),
details=[
{'key': k, 'engines': [e1, e2]}
for k, e1, e2 in e.conflicts
],
status_code=status.HTTP_400_BAD_REQUEST
)
except ValidationError as e:
return error_response(
code=ErrorCodes.VALIDATION_ERROR,

View File

@@ -219,6 +219,8 @@ REST_FRAMEWORK = {
# 允许所有来源(前后端分离项目,安全性由认证系统保障)
CORS_ALLOW_ALL_ORIGINS = os.getenv('CORS_ALLOW_ALL_ORIGINS', 'True').lower() == 'true'
CORS_ALLOW_CREDENTIALS = True
# 暴露额外的响应头给前端(Content-Disposition 用于文件下载获取文件名)
CORS_EXPOSE_HEADERS = ['Content-Disposition']
# ==================== CSRF 配置 ====================
CSRF_TRUSTED_ORIGINS = os.getenv('CSRF_TRUSTED_ORIGINS', 'http://localhost:3000,http://127.0.0.1:3000').split(',')

View File

@@ -636,7 +636,7 @@ class TestDataGenerator:
cur.execute("""
INSERT INTO scan (
target_id, engine_ids, engine_names, merged_configuration, status, worker_id, progress, current_stage,
target_id, engine_ids, engine_names, yaml_configuration, status, worker_id, progress, current_stage,
results_dir, error_message, container_ids, stage_progress,
cached_subdomains_count, cached_websites_count, cached_endpoints_count,
cached_ips_count, cached_directories_count, cached_vulns_total,
@@ -749,7 +749,7 @@ class TestDataGenerator:
cur.execute("""
INSERT INTO scheduled_scan (
name, engine_ids, engine_names, merged_configuration, organization_id, target_id, cron_expression, is_enabled,
name, engine_ids, engine_names, yaml_configuration, organization_id, target_id, cron_expression, is_enabled,
run_count, last_run_time, next_run_time, created_at, updated_at
) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, NOW() - INTERVAL '%s days', NOW())
ON CONFLICT DO NOTHING

View File

@@ -8,7 +8,7 @@ services:
build:
context: ./postgres
dockerfile: Dockerfile
image: ${DOCKER_USER:-yyhuni}/xingrin-postgres:15
image: ${DOCKER_USER:-yyhuni}/xingrin-postgres:${IMAGE_TAG:-dev}
restart: always
environment:
POSTGRES_DB: ${DB_NAME}

View File

@@ -14,7 +14,7 @@ services:
build:
context: ./postgres
dockerfile: Dockerfile
image: ${DOCKER_USER:-yyhuni}/xingrin-postgres:15
image: ${DOCKER_USER:-yyhuni}/xingrin-postgres:${IMAGE_TAG:?IMAGE_TAG is required}
restart: always
environment:
POSTGRES_DB: ${DB_NAME}

View File

@@ -38,6 +38,8 @@ http {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 300s; # 5分钟,支持大数据量导出
proxy_send_timeout 300s;
proxy_pass http://backend;
}

View File

@@ -1,6 +1,5 @@
# 第一阶段:使用 Go 官方镜像编译工具
# 锁定 digest 避免上游更新导致缓存失效
FROM golang:1.24@sha256:7e050c14ae9ca5ae56408a288336545b18632f51402ab0ec8e7be0e649a1fc42 AS go-builder
FROM golang:1.24 AS go-builder
ENV GOPROXY=https://goproxy.cn,direct
# Naabu 需要 CGO 和 libpcap
@@ -37,8 +36,7 @@ RUN CGO_ENABLED=0 go install -v github.com/owasp-amass/amass/v5/cmd/amass@main
RUN go install github.com/hahwul/dalfox/v2@latest
# 第二阶段:运行时镜像
# 锁定 digest 避免上游更新导致缓存失效
FROM ubuntu:24.04@sha256:4fdf0125919d24aec972544669dcd7d6a26a8ad7e6561c73d5549bd6db258ac2
FROM ubuntu:24.04
# 避免交互式提示
ENV DEBIAN_FRONTEND=noninteractive
@@ -104,7 +102,11 @@ RUN pip install uv --break-system-packages && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# 6. 复制后端代码
# 6. 设置 Prefect 配置目录(避免 home 目录不存在的警告)
ENV PREFECT_HOME=/app/.prefect
RUN mkdir -p /app/.prefect
# 7. 复制后端代码
COPY backend /app/backend
ENV PYTHONPATH=/app/backend

View File

@@ -0,0 +1,318 @@
"use client"
import React, { useMemo, useCallback } from "react"
import { Play, Server, Settings, Zap } from "lucide-react"
import { useTranslations } from "next-intl"
import { Badge } from "@/components/ui/badge"
import { Checkbox } from "@/components/ui/checkbox"
import { cn } from "@/lib/utils"
import { CAPABILITY_CONFIG, parseEngineCapabilities, mergeEngineConfigurations } from "@/lib/engine-config"
import type { ScanEngine } from "@/types/engine.types"
export interface EnginePreset {
id: string
label: string
description: string
icon: React.ComponentType<{ className?: string }>
engineIds: number[]
}
interface EnginePresetSelectorProps {
engines: ScanEngine[]
selectedEngineIds: number[]
selectedPresetId: string | null
onPresetChange: (presetId: string | null) => void
onEngineIdsChange: (engineIds: number[]) => void
onConfigurationChange: (config: string) => void
disabled?: boolean
className?: string
}
export function EnginePresetSelector({
engines,
selectedEngineIds,
selectedPresetId,
onPresetChange,
onEngineIdsChange,
onConfigurationChange,
disabled = false,
className,
}: EnginePresetSelectorProps) {
const t = useTranslations("scan.initiate")
const tStages = useTranslations("scan.progress.stages")
// Preset definitions with precise engine filtering
const enginePresets = useMemo(() => {
if (!engines?.length) return []
// Categorize engines by their capabilities
const fullScanEngines: number[] = []
const reconEngines: number[] = []
const vulnEngines: number[] = []
engines.forEach(e => {
const caps = parseEngineCapabilities(e.configuration || "")
const hasRecon = caps.includes("subdomain_discovery") || caps.includes("port_scan") || caps.includes("site_scan") || caps.includes("directory_scan") || caps.includes("url_fetch")
const hasVuln = caps.includes("vuln_scan")
if (hasRecon && hasVuln) {
// Full capability engine - only for full scan
fullScanEngines.push(e.id)
} else if (hasRecon && !hasVuln) {
// Recon only engine
reconEngines.push(e.id)
} else if (hasVuln && !hasRecon) {
// Vuln only engine
vulnEngines.push(e.id)
}
})
return [
{
id: "full",
label: t("presets.fullScan"),
description: t("presets.fullScanDesc"),
icon: Zap,
engineIds: fullScanEngines,
},
{
id: "recon",
label: t("presets.recon"),
description: t("presets.reconDesc"),
icon: Server,
engineIds: reconEngines,
},
{
id: "vuln",
label: t("presets.vulnScan"),
description: t("presets.vulnScanDesc"),
icon: Play,
engineIds: vulnEngines,
},
{
id: "custom",
label: t("presets.custom"),
description: t("presets.customDesc"),
icon: Settings,
engineIds: [],
},
]
}, [engines, t])
const selectedEngines = useMemo(() => {
if (!selectedEngineIds.length || !engines) return []
return engines.filter((e) => selectedEngineIds.includes(e.id))
}, [selectedEngineIds, engines])
const selectedCapabilities = useMemo(() => {
if (!selectedEngines.length) return []
const allCaps = new Set<string>()
selectedEngines.forEach((engine) => {
parseEngineCapabilities(engine.configuration || "").forEach((cap) => allCaps.add(cap))
})
return Array.from(allCaps)
}, [selectedEngines])
// Get currently selected preset details
const selectedPreset = useMemo(() => {
return enginePresets.find(p => p.id === selectedPresetId)
}, [enginePresets, selectedPresetId])
// Get engines for the selected preset
const presetEngines = useMemo(() => {
if (!selectedPreset || selectedPreset.id === "custom") return []
return engines?.filter(e => selectedPreset.engineIds.includes(e.id)) || []
}, [selectedPreset, engines])
// Update configuration when engines change
const updateConfigurationFromEngines = useCallback((engineIds: number[]) => {
if (!engines) return
const selectedEngs = engines.filter(e => engineIds.includes(e.id))
const mergedConfig = mergeEngineConfigurations(selectedEngs.map(e => e.configuration || ""))
onConfigurationChange(mergedConfig)
}, [engines, onConfigurationChange])
const handlePresetSelect = useCallback((preset: EnginePreset) => {
onPresetChange(preset.id)
if (preset.id !== "custom") {
onEngineIdsChange(preset.engineIds)
updateConfigurationFromEngines(preset.engineIds)
} else {
// Custom mode - keep current selection or clear
if (selectedEngineIds.length === 0) {
onConfigurationChange("")
}
}
}, [onPresetChange, onEngineIdsChange, updateConfigurationFromEngines, selectedEngineIds.length, onConfigurationChange])
const handleEngineToggle = useCallback((engineId: number, checked: boolean) => {
let newEngineIds: number[]
if (checked) {
newEngineIds = [...selectedEngineIds, engineId]
} else {
newEngineIds = selectedEngineIds.filter((id) => id !== engineId)
}
onEngineIdsChange(newEngineIds)
updateConfigurationFromEngines(newEngineIds)
}, [selectedEngineIds, onEngineIdsChange, updateConfigurationFromEngines])
return (
<div className={cn("flex flex-col h-full", className)}>
<div className="flex-1 overflow-y-auto p-6">
{/* Compact preset cards */}
<div className="grid grid-cols-4 gap-3 mb-4">
{enginePresets.map((preset) => {
const isActive = selectedPresetId === preset.id
const PresetIcon = preset.icon
const matchedEngines = preset.id === "custom"
? []
: engines?.filter(e => preset.engineIds.includes(e.id)) || []
return (
<button
key={preset.id}
type="button"
onClick={() => handlePresetSelect(preset)}
disabled={disabled}
className={cn(
"flex flex-col items-center p-3 rounded-lg border-2 text-center transition-all",
isActive
? "border-primary bg-primary/5"
: "border-border hover:border-primary/50 hover:bg-muted/30",
disabled && "opacity-50 cursor-not-allowed"
)}
>
<div className={cn(
"flex h-10 w-10 items-center justify-center rounded-lg mb-2",
isActive ? "bg-primary text-primary-foreground" : "bg-muted"
)}>
<PresetIcon className="h-5 w-5" />
</div>
<span className="text-sm font-medium">{preset.label}</span>
{preset.id !== "custom" && (
<span className="text-xs text-muted-foreground mt-1">
{matchedEngines.length} {t("presets.enginesCount")}
</span>
)}
</button>
)
})}
</div>
{/* Selected preset details */}
{selectedPresetId && selectedPresetId !== "custom" && (
<div className="border rounded-lg p-4 bg-muted/10">
<div className="flex items-start justify-between mb-3">
<div>
<h3 className="font-medium">{selectedPreset?.label}</h3>
<p className="text-sm text-muted-foreground mt-1">{selectedPreset?.description}</p>
</div>
</div>
{/* Capabilities */}
<div className="mb-4">
<h4 className="text-xs font-medium text-muted-foreground mb-2">{t("presets.capabilities")}</h4>
<div className="flex flex-wrap gap-1.5">
{selectedCapabilities.map((capKey) => {
const config = CAPABILITY_CONFIG[capKey]
return (
<Badge key={capKey} variant="outline" className={cn("text-xs", config?.color)}>
{tStages(capKey)}
</Badge>
)
})}
</div>
</div>
{/* Engines list */}
<div>
<h4 className="text-xs font-medium text-muted-foreground mb-2">{t("presets.usedEngines")}</h4>
<div className="flex flex-wrap gap-2">
{presetEngines.map((engine) => (
<span key={engine.id} className="text-sm px-3 py-1.5 bg-background rounded-md border">
{engine.name}
</span>
))}
</div>
</div>
</div>
)}
{/* Custom mode engine selection */}
{selectedPresetId === "custom" && (
<div className="border rounded-lg p-4 bg-muted/10">
<div className="flex items-start justify-between mb-3">
<div>
<h3 className="font-medium">{selectedPreset?.label}</h3>
<p className="text-sm text-muted-foreground mt-1">{selectedPreset?.description}</p>
</div>
</div>
{/* Capabilities - dynamically calculated from selected engines */}
<div className="mb-4">
<h4 className="text-xs font-medium text-muted-foreground mb-2">{t("presets.capabilities")}</h4>
<div className="flex flex-wrap gap-1.5">
{selectedCapabilities.length > 0 ? (
selectedCapabilities.map((capKey) => {
const config = CAPABILITY_CONFIG[capKey]
return (
<Badge key={capKey} variant="outline" className={cn("text-xs", config?.color)}>
{tStages(capKey)}
</Badge>
)
})
) : (
<span className="text-xs text-muted-foreground">{t("presets.noCapabilities")}</span>
)}
</div>
</div>
{/* Engines list - selectable */}
<div>
<h4 className="text-xs font-medium text-muted-foreground mb-2">{t("presets.usedEngines")}</h4>
<div className="flex flex-wrap gap-2">
{engines?.map((engine) => {
const isSelected = selectedEngineIds.includes(engine.id)
return (
<label
key={engine.id}
htmlFor={`preset-engine-${engine.id}`}
className={cn(
"flex items-center gap-2 px-3 py-1.5 rounded-md cursor-pointer transition-all border",
isSelected
? "bg-primary/10 border-primary/30"
: "hover:bg-muted/50 border-border",
disabled && "opacity-50 cursor-not-allowed"
)}
>
<Checkbox
id={`preset-engine-${engine.id}`}
checked={isSelected}
onCheckedChange={(checked) => {
handleEngineToggle(engine.id, checked as boolean)
}}
disabled={disabled}
className="h-4 w-4"
/>
<span className="text-sm">{engine.name}</span>
</label>
)
})}
</div>
</div>
</div>
)}
{/* Empty state */}
{!selectedPresetId && (
<div className="flex flex-col items-center justify-center py-12 text-muted-foreground">
<Server className="h-12 w-12 mb-4 opacity-50" />
<p className="text-sm">{t("presets.selectHint")}</p>
</div>
)}
</div>
</div>
)
}

View File

@@ -1,7 +1,7 @@
"use client"
import React, { useState, useMemo } from "react"
import { Play, Settings2 } from "lucide-react"
import React, { useState, useMemo, useCallback } from "react"
import { Play, Server, Settings, ChevronLeft, ChevronRight } from "lucide-react"
import { useTranslations } from "next-intl"
import { Button } from "@/components/ui/button"
@@ -9,15 +9,22 @@ import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog"
import { Badge } from "@/components/ui/badge"
import { Checkbox } from "@/components/ui/checkbox"
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
} from "@/components/ui/alert-dialog"
import { LoadingSpinner } from "@/components/loading-spinner"
import { cn } from "@/lib/utils"
import { CAPABILITY_CONFIG, getEngineIcon, parseEngineCapabilities } from "@/lib/engine-config"
import { EnginePresetSelector } from "./engine-preset-selector"
import { ScanConfigEditor } from "./scan-config-editor"
import type { Organization } from "@/types/organization.types"
@@ -46,36 +53,78 @@ export function InitiateScanDialog({
}: InitiateScanDialogProps) {
const t = useTranslations("scan.initiate")
const tToast = useTranslations("toast")
const tCommon = useTranslations("common.actions")
const [step, setStep] = useState(1)
const [selectedEngineIds, setSelectedEngineIds] = useState<number[]>([])
const [isSubmitting, setIsSubmitting] = useState(false)
const [selectedPresetId, setSelectedPresetId] = useState<string | null>(null)
// Configuration state management
const [configuration, setConfiguration] = useState("")
const [isConfigEdited, setIsConfigEdited] = useState(false)
const [isYamlValid, setIsYamlValid] = useState(true)
const [showOverwriteConfirm, setShowOverwriteConfirm] = useState(false)
const [pendingConfigChange, setPendingConfigChange] = useState<string | null>(null)
const { data: engines, isLoading, error } = useEngines()
const { data: engines } = useEngines()
const steps = [
{ id: 1, title: t("steps.selectEngine"), icon: Server },
{ id: 2, title: t("steps.editConfig"), icon: Settings },
]
const selectedEngines = useMemo(() => {
if (!selectedEngineIds.length || !engines) return []
return engines.filter((e) => selectedEngineIds.includes(e.id))
}, [selectedEngineIds, engines])
const selectedCapabilities = useMemo(() => {
if (!selectedEngines.length) return []
const allCaps = new Set<string>()
selectedEngines.forEach((engine) => {
parseEngineCapabilities(engine.configuration || "").forEach((cap) => allCaps.add(cap))
})
return Array.from(allCaps)
}, [selectedEngines])
const handleEngineToggle = (engineId: number, checked: boolean) => {
if (checked) {
setSelectedEngineIds((prev) => [...prev, engineId])
// Handle configuration change from preset selector (may need confirmation)
const handlePresetConfigChange = useCallback((value: string) => {
if (isConfigEdited && configuration !== value) {
setPendingConfigChange(value)
setShowOverwriteConfirm(true)
} else {
setSelectedEngineIds((prev) => prev.filter((id) => id !== engineId))
setConfiguration(value)
setIsConfigEdited(false)
}
}, [isConfigEdited, configuration])
// Handle manual config editing
const handleManualConfigChange = useCallback((value: string) => {
setConfiguration(value)
setIsConfigEdited(true)
}, [])
const handleEngineIdsChange = useCallback((engineIds: number[]) => {
setSelectedEngineIds(engineIds)
}, [])
const handleOverwriteConfirm = () => {
if (pendingConfigChange !== null) {
setConfiguration(pendingConfigChange)
setIsConfigEdited(false)
}
setShowOverwriteConfirm(false)
setPendingConfigChange(null)
}
const handleOverwriteCancel = () => {
setShowOverwriteConfirm(false)
setPendingConfigChange(null)
}
const handleYamlValidationChange = (isValid: boolean) => {
setIsYamlValid(isValid)
}
const handleInitiate = async () => {
if (!selectedEngineIds.length) return
if (selectedEngineIds.length === 0) {
toast.error(tToast("noEngineSelected"))
return
}
if (!configuration.trim()) {
toast.error(tToast("emptyConfig"))
return
}
if (!organizationId && !targetId) {
toast.error(tToast("paramError"), { description: tToast("paramErrorDesc") })
return
@@ -85,7 +134,9 @@ export function InitiateScanDialog({
const response = await initiateScan({
organizationId,
targetId,
configuration,
engineIds: selectedEngineIds,
engineNames: selectedEngines.map(e => e.name),
})
// A 201 response from the backend indicates the scan task was created successfully
@@ -96,19 +147,14 @@ export function InitiateScanDialog({
onSuccess?.()
onOpenChange(false)
setSelectedEngineIds([])
setConfiguration("")
setIsConfigEdited(false)
} catch (err: unknown) {
console.error("Failed to initiate scan:", err)
// Handle configuration conflict errors
const error = err as { response?: { data?: { error?: { code?: string; message?: string } } } }
if (error?.response?.data?.error?.code === 'CONFIG_CONFLICT') {
toast.error(tToast("configConflict"), {
description: error.response.data.error.message,
})
} else {
toast.error(tToast("initiateScanFailed"), {
description: err instanceof Error ? err.message : tToast("unknownError"),
})
}
toast.error(tToast("initiateScanFailed"), {
description: error?.response?.data?.error?.message || (err instanceof Error ? err.message : tToast("unknownError")),
})
} finally {
setIsSubmitting(false)
}
@@ -117,158 +163,127 @@ export function InitiateScanDialog({
const handleOpenChange = (newOpen: boolean) => {
if (!isSubmitting) {
onOpenChange(newOpen)
if (!newOpen) setSelectedEngineIds([])
if (!newOpen) {
setStep(1)
setSelectedPresetId(null)
setSelectedEngineIds([])
setConfiguration("")
setIsConfigEdited(false)
}
}
}
const canProceedToStep2 = selectedPresetId !== null && selectedEngineIds.length > 0
const canSubmit = selectedEngineIds.length > 0 && configuration.trim().length > 0 && isYamlValid
return (
<Dialog open={open} onOpenChange={handleOpenChange}>
<DialogContent className="max-w-[90vw] sm:max-w-[900px] p-0 gap-0">
<DialogHeader className="px-6 pt-6 pb-4">
<DialogTitle className="flex items-center gap-2">
<Play className="h-5 w-5" />
{t("title")}
<span className="text-sm font-normal text-muted-foreground">
{targetName ? (
<>
{t("targetDesc")} <span className="font-medium text-foreground">{targetName}</span> {t("selectEngine")}
</>
) : (
<>
{t("orgDesc")} <span className="font-medium text-foreground">{organization?.name}</span> {t("selectEngine")}
</>
)}
</span>
</DialogTitle>
</DialogHeader>
<div className="flex border-t h-[480px]">
{/* Left side engine list */}
<div className="w-[260px] border-r flex flex-col shrink-0">
<div className="px-4 py-3 border-b bg-muted/30 shrink-0">
<h3 className="text-sm font-medium">
{t("selectEngineTitle")}
{selectedEngineIds.length > 0 && (
<span className="text-xs text-muted-foreground font-normal ml-2">
{t("selectedCount", { count: selectedEngineIds.length })}
</span>
)}
</h3>
</div>
<div className="flex-1 overflow-y-auto">
<div className="p-2">
{isLoading ? (
<div className="flex items-center justify-center py-8">
<LoadingSpinner />
<span className="ml-2 text-sm text-muted-foreground">{t("loading")}</span>
</div>
) : error ? (
<div className="py-8 text-center text-sm text-destructive">{t("loadFailed")}</div>
) : !engines?.length ? (
<div className="py-8 text-center text-sm text-muted-foreground">{t("noEngines")}</div>
<div className="flex items-center justify-between">
<div>
<DialogTitle className="flex items-center gap-2">
<Play className="h-5 w-5" />
{t("title")}
</DialogTitle>
<DialogDescription className="mt-1">
{targetName ? (
<>{t("targetDesc")} <span className="font-medium text-foreground">{targetName}</span></>
) : (
<div className="space-y-1">
{engines.map((engine) => {
const capabilities = parseEngineCapabilities(engine.configuration || "")
const EngineIcon = getEngineIcon(capabilities)
const primaryCap = capabilities[0]
const iconConfig = primaryCap ? CAPABILITY_CONFIG[primaryCap] : null
const isSelected = selectedEngineIds.includes(engine.id)
return (
<label
key={engine.id}
htmlFor={`engine-${engine.id}`}
className={cn(
"flex items-center gap-3 p-3 rounded-lg cursor-pointer transition-all",
isSelected
? "bg-primary/10 border border-primary/30"
: "hover:bg-muted/50 border border-transparent"
)}
>
<Checkbox
id={`engine-${engine.id}`}
checked={isSelected}
onCheckedChange={(checked) => handleEngineToggle(engine.id, checked as boolean)}
disabled={isSubmitting}
/>
<div
className={cn(
"flex h-8 w-8 items-center justify-center rounded-md shrink-0",
iconConfig?.color || "bg-muted text-muted-foreground"
)}
>
<EngineIcon className="h-4 w-4" />
</div>
<div className="flex-1 min-w-0">
<div className="font-medium text-sm truncate">{engine.name}</div>
<div className="text-xs text-muted-foreground">
{capabilities.length > 0 ? t("capabilities", { count: capabilities.length }) : t("noConfig")}
</div>
</div>
</label>
)
})}
</div>
<>{t("orgDesc")} <span className="font-medium text-foreground">{organization?.name}</span></>
)}
</div>
</DialogDescription>
</div>
{/* Step indicator */}
<div className="text-sm text-muted-foreground mr-8">
{t("stepIndicator", { current: step, total: steps.length })}
</div>
</div>
</DialogHeader>
{/* Right side engine details */}
<div className="flex-1 flex flex-col min-w-0 overflow-hidden w-0">
{selectedEngines.length > 0 ? (
<>
<div className="px-4 py-3 border-b bg-muted/30 shrink-0 min-w-0">
<div className="flex flex-wrap gap-1.5">
{selectedCapabilities.map((capKey) => {
const config = CAPABILITY_CONFIG[capKey]
return (
<Badge key={capKey} variant="outline" className={cn("text-xs", config?.color)}>
{config?.label || capKey}
</Badge>
)
})}
</div>
</div>
<div className="flex-1 flex flex-col overflow-hidden p-4 min-w-0">
<div className="flex-1 bg-muted/50 rounded-lg border overflow-hidden min-h-0 min-w-0">
<pre className="h-full p-3 text-xs font-mono overflow-auto whitespace-pre-wrap break-all">
{selectedEngines.map((e) => e.configuration || `# ${t("noConfig")}`).join("\n\n")}
</pre>
</div>
</div>
</>
<div className="border-t h-[480px] overflow-hidden">
{/* Step 1: Select preset/engines */}
{step === 1 && engines && (
<EnginePresetSelector
engines={engines}
selectedEngineIds={selectedEngineIds}
selectedPresetId={selectedPresetId}
onPresetChange={setSelectedPresetId}
onEngineIdsChange={handleEngineIdsChange}
onConfigurationChange={handlePresetConfigChange}
disabled={isSubmitting}
/>
)}
{/* Step 2: Edit configuration */}
{step === 2 && (
<ScanConfigEditor
configuration={configuration}
onChange={handleManualConfigChange}
onValidationChange={handleYamlValidationChange}
selectedEngines={selectedEngines}
isConfigEdited={isConfigEdited}
disabled={isSubmitting}
/>
)}
</div>
<div className="px-6 py-4 border-t flex items-center justify-between">
<div className="text-sm text-muted-foreground">
{step === 1 && selectedEngineIds.length > 0 && (
<span className="text-primary">{t("selectedCount", { count: selectedEngineIds.length })}</span>
)}
</div>
<div className="flex gap-2">
{step > 1 && (
<Button variant="outline" onClick={() => setStep(step - 1)} disabled={isSubmitting}>
<ChevronLeft className="h-4 w-4 mr-1" />
{t("back")}
</Button>
)}
{step === 1 ? (
<Button onClick={() => setStep(2)} disabled={!canProceedToStep2}>
{t("next")}
<ChevronRight className="h-4 w-4 ml-1" />
</Button>
) : (
<div className="flex-1 flex items-center justify-center text-muted-foreground">
<div className="text-center">
<Settings2 className="h-10 w-10 mx-auto mb-3 opacity-30" />
<p className="text-sm">{t("selectEngineHint")}</p>
</div>
</div>
<Button onClick={handleInitiate} disabled={!canSubmit || isSubmitting}>
{isSubmitting ? (
<>
<LoadingSpinner />
{t("initiating")}
</>
) : (
<>
<Play className="h-4 w-4" />
{t("startScan")}
</>
)}
</Button>
)}
</div>
</div>
<DialogFooter className="px-6 py-4 border-t">
<Button variant="outline" onClick={() => handleOpenChange(false)} disabled={isSubmitting}>
{tCommon("cancel")}
</Button>
<Button onClick={handleInitiate} disabled={!selectedEngineIds.length || isSubmitting}>
{isSubmitting ? (
<>
<LoadingSpinner />
{t("initiating")}
</>
) : (
<>
<Play className="h-4 w-4" />
{t("startScan")}
</>
)}
</Button>
</DialogFooter>
</DialogContent>
{/* Overwrite confirmation dialog */}
<AlertDialog open={showOverwriteConfirm} onOpenChange={setShowOverwriteConfirm}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("overwriteConfirm.title")}</AlertDialogTitle>
<AlertDialogDescription>
{t("overwriteConfirm.description")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel onClick={handleOverwriteCancel}>
{t("overwriteConfirm.cancel")}
</AlertDialogCancel>
<AlertDialogAction onClick={handleOverwriteConfirm}>
{t("overwriteConfirm.confirm")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
</Dialog>
)
}

View File

@@ -11,18 +11,27 @@ import {
DialogTitle,
DialogTrigger,
} from "@/components/ui/dialog"
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
} from "@/components/ui/alert-dialog"
import { Button } from "@/components/ui/button"
import { Textarea } from "@/components/ui/textarea"
import { Badge } from "@/components/ui/badge"
import { Checkbox } from "@/components/ui/checkbox"
import { LoadingSpinner } from "@/components/loading-spinner"
import { cn } from "@/lib/utils"
import { toast } from "sonner"
import { Zap, Settings2, AlertCircle, ChevronRight, ChevronLeft, Target, Server } from "lucide-react"
import { Zap, AlertCircle, ChevronRight, ChevronLeft, Target, Server, Settings } from "lucide-react"
import { quickScan } from "@/services/scan.service"
import { CAPABILITY_CONFIG, getEngineIcon, parseEngineCapabilities } from "@/lib/engine-config"
import { TargetValidator } from "@/lib/target-validator"
import { useEngines } from "@/hooks/use-engines"
import { EnginePresetSelector } from "./engine-preset-selector"
import { ScanConfigEditor } from "./scan-config-editor"
interface QuickScanDialogProps {
trigger?: React.ReactNode
@@ -36,8 +45,16 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
const [targetInput, setTargetInput] = React.useState("")
const [selectedEngineIds, setSelectedEngineIds] = React.useState<number[]>([])
const [selectedPresetId, setSelectedPresetId] = React.useState<string | null>(null)
const { data: engines, isLoading, error } = useEngines()
// Configuration state management
const [configuration, setConfiguration] = React.useState("")
const [isConfigEdited, setIsConfigEdited] = React.useState(false)
const [isYamlValid, setIsYamlValid] = React.useState(true)
const [showOverwriteConfirm, setShowOverwriteConfirm] = React.useState(false)
const [pendingConfigChange, setPendingConfigChange] = React.useState<string | null>(null)
const { data: engines } = useEngines()
const lineNumbersRef = React.useRef<HTMLDivElement | null>(null)
@@ -61,18 +78,12 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
return engines.filter(e => selectedEngineIds.includes(e.id))
}, [selectedEngineIds, engines])
const selectedCapabilities = React.useMemo(() => {
if (!selectedEngines.length) return []
const allCaps = new Set<string>()
selectedEngines.forEach((engine) => {
parseEngineCapabilities(engine.configuration || "").forEach((cap) => allCaps.add(cap))
})
return Array.from(allCaps)
}, [selectedEngines])
const resetForm = () => {
setTargetInput("")
setSelectedEngineIds([])
setSelectedPresetId(null)
setConfiguration("")
setIsConfigEdited(false)
setStep(1)
}
@@ -81,19 +92,52 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
if (!isOpen) resetForm()
}
const handleEngineToggle = (engineId: number, checked: boolean) => {
if (checked) {
setSelectedEngineIds((prev) => [...prev, engineId])
// Handle configuration change from preset selector (may need confirmation)
const handlePresetConfigChange = React.useCallback((value: string) => {
if (isConfigEdited && configuration !== value) {
setPendingConfigChange(value)
setShowOverwriteConfirm(true)
} else {
setSelectedEngineIds((prev) => prev.filter((id) => id !== engineId))
setConfiguration(value)
setIsConfigEdited(false)
}
}, [isConfigEdited, configuration])
// Handle manual config editing
const handleManualConfigChange = React.useCallback((value: string) => {
setConfiguration(value)
setIsConfigEdited(true)
}, [])
const handleEngineIdsChange = React.useCallback((engineIds: number[]) => {
setSelectedEngineIds(engineIds)
}, [])
const handleOverwriteConfirm = () => {
if (pendingConfigChange !== null) {
setConfiguration(pendingConfigChange)
setIsConfigEdited(false)
}
setShowOverwriteConfirm(false)
setPendingConfigChange(null)
}
const handleOverwriteCancel = () => {
setShowOverwriteConfirm(false)
setPendingConfigChange(null)
}
const handleYamlValidationChange = (isValid: boolean) => {
setIsYamlValid(isValid)
}
const canProceedToStep2 = validInputs.length > 0 && !hasErrors
const canSubmit = selectedEngineIds.length > 0
const canProceedToStep3 = selectedPresetId !== null && selectedEngineIds.length > 0
const canSubmit = selectedEngineIds.length > 0 && configuration.trim().length > 0 && isYamlValid
const handleNext = () => {
if (step === 1 && canProceedToStep2) setStep(2)
else if (step === 2 && canProceedToStep3) setStep(3)
}
const handleBack = () => {
@@ -103,6 +147,7 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
const steps = [
{ id: 1, title: t("step1Title"), icon: Target },
{ id: 2, title: t("step2Title"), icon: Server },
{ id: 3, title: t("step3Title"), icon: Settings },
]
const handleSubmit = async () => {
@@ -118,6 +163,10 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
toast.error(t("toast.selectEngine"))
return
}
if (!configuration.trim()) {
toast.error(t("toast.emptyConfig"))
return
}
const targets = validInputs.map(r => r.originalInput)
@@ -125,7 +174,9 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
try {
const response = await quickScan({
targets: targets.map(name => ({ name })),
configuration,
engineIds: selectedEngineIds,
engineNames: selectedEngines.map(e => e.name),
})
const { targetStats, scans, count } = response
@@ -139,13 +190,7 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
handleClose(false)
} catch (error: unknown) {
const err = error as { response?: { data?: { error?: { code?: string; message?: string }; detail?: string } } }
if (err?.response?.data?.error?.code === 'CONFIG_CONFLICT') {
toast.error(t("toast.configConflict"), {
description: err.response.data.error.message,
})
} else {
toast.error(err?.response?.data?.detail || err?.response?.data?.error?.message || t("toast.createFailed"))
}
toast.error(err?.response?.data?.detail || err?.response?.data?.error?.message || t("toast.createFailed"))
} finally {
setIsSubmitting(false)
}
@@ -179,36 +224,8 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
</DialogDescription>
</div>
{/* Step indicator */}
<div className="flex items-center gap-2 mr-8">
{steps.map((s, index) => (
<React.Fragment key={s.id}>
<button
type="button"
onClick={() => {
if (s.id < step) setStep(s.id)
else if (s.id === 2 && canProceedToStep2) setStep(2)
}}
className={cn(
"flex items-center gap-1.5 px-3 py-1.5 rounded-full text-sm transition-colors",
step === s.id
? "bg-primary text-primary-foreground"
: step > s.id
? "bg-primary/20 text-primary cursor-pointer hover:bg-primary/30"
: "bg-muted text-muted-foreground"
)}
disabled={s.id > step && !(s.id === 2 && canProceedToStep2)}
>
<s.icon className="h-4 w-4" />
<span className="hidden sm:inline">{s.title}</span>
</button>
{index < steps.length - 1 && (
<div className={cn(
"w-8 h-[2px]",
step > s.id ? "bg-primary/50" : "bg-muted"
)} />
)}
</React.Fragment>
))}
<div className="text-sm text-muted-foreground mr-8">
{t("stepIndicator", { current: step, total: steps.length })}
</div>
</div>
</DialogHeader>
@@ -259,118 +276,30 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
</div>
)}
{/* Step 2: Select engines */}
{step === 2 && (
<div className="flex h-full">
<div className="w-[320px] border-r flex flex-col shrink-0">
<div className="px-4 py-3 border-b bg-muted/30 shrink-0">
<h3 className="text-sm font-medium">{t("selectEngine")}</h3>
{selectedEngineIds.length > 0 && (
<p className="text-xs text-muted-foreground mt-1">
{t("selectedCount", { count: selectedEngineIds.length })}
</p>
)}
</div>
<div className="flex-1 overflow-y-auto">
<div className="p-2">
{isLoading ? (
<div className="flex items-center justify-center py-8">
<LoadingSpinner />
<span className="ml-2 text-sm text-muted-foreground">{t("loading")}</span>
</div>
) : error ? (
<div className="py-8 text-center text-sm text-destructive">{t("loadFailed")}</div>
) : !engines?.length ? (
<div className="py-8 text-center text-sm text-muted-foreground">{t("noEngines")}</div>
) : (
<div className="space-y-1">
{engines.map((engine) => {
const capabilities = parseEngineCapabilities(engine.configuration || "")
const EngineIcon = getEngineIcon(capabilities)
const primaryCap = capabilities[0]
const iconConfig = primaryCap ? CAPABILITY_CONFIG[primaryCap] : null
const isSelected = selectedEngineIds.includes(engine.id)
return (
<label
key={engine.id}
htmlFor={`quick-engine-${engine.id}`}
className={cn(
"flex items-center gap-3 p-3 rounded-lg cursor-pointer transition-all",
isSelected
? "bg-primary/10 border border-primary/30"
: "hover:bg-muted/50 border border-transparent"
)}
>
<Checkbox
id={`quick-engine-${engine.id}`}
checked={isSelected}
onCheckedChange={(checked) => handleEngineToggle(engine.id, checked as boolean)}
disabled={isSubmitting}
/>
<div
className={cn(
"flex h-8 w-8 items-center justify-center rounded-md shrink-0",
iconConfig?.color || "bg-muted text-muted-foreground"
)}
>
<EngineIcon className="h-4 w-4" />
</div>
<div className="flex-1 min-w-0">
<div className="font-medium text-sm truncate">{engine.name}</div>
<div className="text-xs text-muted-foreground">
{capabilities.length > 0 ? t("capabilities", { count: capabilities.length }) : t("noConfig")}
</div>
</div>
</label>
)
})}
</div>
)}
</div>
</div>
</div>
<div className="flex-1 flex flex-col min-w-0 overflow-hidden">
{selectedEngines.length > 0 ? (
<>
<div className="px-4 py-3 border-b bg-muted/30 flex items-center gap-2 shrink-0">
<Settings2 className="h-4 w-4 text-muted-foreground" />
<h3 className="text-sm font-medium truncate">
{selectedEngines.map((e) => e.name).join(", ")}
</h3>
</div>
<div className="flex-1 flex flex-col overflow-hidden p-4 gap-3">
{selectedCapabilities.length > 0 && (
<div className="flex flex-wrap gap-1.5 shrink-0">
{selectedCapabilities.map((capKey) => {
const config = CAPABILITY_CONFIG[capKey]
return (
<Badge key={capKey} variant="outline" className={cn("text-xs", config?.color)}>
{config?.label || capKey}
</Badge>
)
})}
</div>
)}
<div className="flex-1 bg-muted/50 rounded-lg border overflow-hidden min-h-0">
<pre className="h-full p-3 text-xs font-mono overflow-auto whitespace-pre-wrap break-all">
{selectedEngines.map((e) => e.configuration || `# ${t("noConfig")}`).join("\n\n")}
</pre>
</div>
</div>
</>
) : (
<div className="flex-1 flex items-center justify-center text-muted-foreground">
<div className="text-center">
<Settings2 className="h-10 w-10 mx-auto mb-3 opacity-30" />
<p className="text-sm">{t("selectEngineHint")}</p>
</div>
</div>
)}
</div>
</div>
{/* Step 2: Select preset/engines */}
{step === 2 && engines && (
<EnginePresetSelector
engines={engines}
selectedEngineIds={selectedEngineIds}
selectedPresetId={selectedPresetId}
onPresetChange={setSelectedPresetId}
onEngineIdsChange={handleEngineIdsChange}
onConfigurationChange={handlePresetConfigChange}
disabled={isSubmitting}
/>
)}
{/* Step 3: Edit configuration */}
{step === 3 && (
<ScanConfigEditor
configuration={configuration}
onChange={handleManualConfigChange}
onValidationChange={handleYamlValidationChange}
selectedEngines={selectedEngines}
isConfigEdited={isConfigEdited}
disabled={isSubmitting}
/>
)}
</div>
<DialogFooter className="px-4 py-4 border-t !flex !items-center !justify-between">
@@ -392,10 +321,10 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
{t("back")}
</Button>
)}
{step === 1 ? (
{step < 3 ? (
<Button
onClick={handleNext}
disabled={!canProceedToStep2}
disabled={step === 1 ? !canProceedToStep2 : !canProceedToStep3}
>
{t("next")}
<ChevronRight className="h-4 w-4 ml-1" />
@@ -418,6 +347,26 @@ export function QuickScanDialog({ trigger }: QuickScanDialogProps) {
</div>
</DialogFooter>
</DialogContent>
{/* Overwrite confirmation dialog */}
<AlertDialog open={showOverwriteConfirm} onOpenChange={setShowOverwriteConfirm}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("overwriteConfirm.title")}</AlertDialogTitle>
<AlertDialogDescription>
{t("overwriteConfirm.description")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel onClick={handleOverwriteCancel}>
{t("overwriteConfirm.cancel")}
</AlertDialogCancel>
<AlertDialogAction onClick={handleOverwriteConfirm}>
{t("overwriteConfirm.confirm")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
</Dialog>
)
}

View File

@@ -0,0 +1,86 @@
"use client"
import React, { useMemo } from "react"
import { useTranslations } from "next-intl"
import { Badge } from "@/components/ui/badge"
import { YamlEditor } from "@/components/ui/yaml-editor"
import { cn } from "@/lib/utils"
import { CAPABILITY_CONFIG, parseEngineCapabilities } from "@/lib/engine-config"
import type { ScanEngine } from "@/types/engine.types"
interface ScanConfigEditorProps {
configuration: string
onChange: (value: string) => void
onValidationChange?: (isValid: boolean) => void
selectedEngines?: ScanEngine[]
selectedCapabilities?: string[]
isConfigEdited?: boolean
disabled?: boolean
showCapabilities?: boolean
className?: string
}
export function ScanConfigEditor({
configuration,
onChange,
onValidationChange,
selectedEngines = [],
selectedCapabilities: propCapabilities,
isConfigEdited = false,
disabled = false,
showCapabilities = true,
className,
}: ScanConfigEditorProps) {
const t = useTranslations("scan.initiate")
const tStages = useTranslations("scan.progress.stages")
// Calculate capabilities from selected engines if not provided
const capabilities = useMemo(() => {
if (propCapabilities) return propCapabilities
if (!selectedEngines.length) return []
const allCaps = new Set<string>()
selectedEngines.forEach((engine) => {
parseEngineCapabilities(engine.configuration || "").forEach((cap) => allCaps.add(cap))
})
return Array.from(allCaps)
}, [selectedEngines, propCapabilities])
return (
<div className={cn("flex flex-col h-full", className)}>
{/* Capabilities header */}
{showCapabilities && (
<div className="px-4 py-2 border-b bg-muted/30 flex items-center gap-2 shrink-0">
{capabilities.length > 0 && (
<div className="flex flex-wrap gap-1">
{capabilities.map((capKey) => {
const config = CAPABILITY_CONFIG[capKey]
return (
<Badge key={capKey} variant="outline" className={cn("text-xs py-0", config?.color)}>
{tStages(capKey)}
</Badge>
)
})}
</div>
)}
{isConfigEdited && (
<Badge variant="outline" className="ml-auto text-xs">
{t("configEdited")}
</Badge>
)}
</div>
)}
{/* YAML Editor */}
<div className="flex-1 overflow-hidden">
<YamlEditor
value={configuration}
onChange={onChange}
disabled={disabled}
onValidationChange={onValidationChange}
/>
</div>
</div>
)
}

View File

@@ -0,0 +1,111 @@
"use client"
import { useEffect, useRef, useMemo } from "react"
import type { ScanLog } from "@/services/scan.service"
interface ScanLogListProps {
logs: ScanLog[]
loading?: boolean
}
/**
* Format an ISO timestamp as HH:mm:ss
*/
function formatTime(isoString: string): string {
try {
const date = new Date(isoString)
return date.toLocaleTimeString('zh-CN', {
hour: '2-digit',
minute: '2-digit',
second: '2-digit',
hour12: false,
})
} catch {
return isoString
}
}
/**
* Escape HTML to prevent XSS
*/
function escapeHtml(text: string): string {
return text
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&#039;')
}
/**
* Scan log list component
*
* Features:
* - Pre-renders an HTML string to reduce DOM nodes and improve performance
* - Color-coded by level: info = default, warning = yellow, error = red
* - Auto-scrolls to the bottom
*/
export function ScanLogList({ logs, loading }: ScanLogListProps) {
const containerRef = useRef<HTMLDivElement>(null)
const isAtBottomRef = useRef(true) // Track whether the user is scrolled to the bottom
// Pre-render the HTML string
const htmlContent = useMemo(() => {
if (logs.length === 0) return ''
return logs.map(log => {
const time = formatTime(log.createdAt)
const content = escapeHtml(log.content)
const levelStyle = log.level === 'error'
? 'color:#ef4444'
: log.level === 'warning'
? 'color:#eab308'
: ''
return `<div style="line-height:1.625;word-break:break-all;${levelStyle}"><span style="color:#6b7280">${time}</span> ${content}</div>`
}).join('')
}, [logs])
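// Illustrative example (hypothetical log line, not part of this diff): a "warning" entry
// created at 14:03:21 with content "templates loaded" would render as
//   <div style="line-height:1.625;word-break:break-all;color:#eab308"><span style="color:#6b7280">14:03:21</span> templates loaded</div>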
// Listen for scroll events to detect whether the user is at the bottom
useEffect(() => {
const container = containerRef.current
if (!container) return
const handleScroll = () => {
const { scrollTop, scrollHeight, clientHeight } = container
// Treat the user as being at the bottom within a 30px tolerance
isAtBottomRef.current = scrollHeight - scrollTop - clientHeight < 30
}
container.addEventListener('scroll', handleScroll)
return () => container.removeEventListener('scroll', handleScroll)
}, [])
// Only auto-scroll when the user is already at the bottom
useEffect(() => {
if (containerRef.current && isAtBottomRef.current) {
containerRef.current.scrollTop = containerRef.current.scrollHeight
}
}, [htmlContent])
return (
<div
ref={containerRef}
className="h-[400px] overflow-y-auto font-mono text-[11px] p-3 bg-muted/30 rounded-lg"
>
{logs.length === 0 && !loading && (
<div className="text-muted-foreground text-center py-8">
</div>
)}
{htmlContent && (
<div dangerouslySetInnerHTML={{ __html: htmlContent }} />
)}
{loading && logs.length === 0 && (
<div className="text-muted-foreground text-center py-8">
...
</div>
)}
</div>
)
}

View File

@@ -1,6 +1,7 @@
"use client"
import * as React from "react"
import { useState } from "react"
import {
Dialog,
DialogContent,
@@ -9,6 +10,7 @@ import {
} from "@/components/ui/dialog"
import { Badge } from "@/components/ui/badge"
import { Separator } from "@/components/ui/separator"
import { Tabs, TabsList, TabsTrigger } from "@/components/ui/tabs"
import {
IconCircleCheck,
IconLoader,
@@ -19,6 +21,8 @@ import {
import { cn } from "@/lib/utils"
import { useTranslations, useLocale } from "next-intl"
import type { ScanStage, ScanRecord, StageProgress, StageStatus } from "@/types/scan.types"
import { useScanLogs } from "@/hooks/use-scan-logs"
import { ScanLogList } from "./scan-log-list"
/**
* Scan stage details
@@ -190,12 +194,26 @@ export function ScanProgressDialog({
}: ScanProgressDialogProps) {
const t = useTranslations("scan.progress")
const locale = useLocale()
const [activeTab, setActiveTab] = useState<'stages' | 'logs'>('stages')
// Determine whether the scan is still running (used to control polling)
const isRunning = data?.status === 'running' || data?.status === 'initiated'
// Log polling hook
const { logs, loading: logsLoading } = useScanLogs({
scanId: data?.id ?? 0,
enabled: open && activeTab === 'logs' && !!data?.id,
pollingInterval: isRunning ? 3000 : 0, // Poll every 3s while running, otherwise disable polling
})
if (!data) return null
// Fixed width so the dialog does not resize when switching tabs
const dialogWidth = 'sm:max-w-[600px] sm:min-w-[550px]'
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className="sm:max-w-[500px]">
<DialogContent className={cn(dialogWidth, "transition-all duration-200")}>
<DialogHeader>
<DialogTitle className="flex items-center gap-2">
<ScanStatusIcon status={data.status} />
@@ -209,9 +227,19 @@ export function ScanProgressDialog({
<span className="text-muted-foreground">{t("target")}</span>
<span className="font-medium">{data.targetName}</span>
</div>
<div className="flex items-center justify-between text-sm">
<span className="text-muted-foreground">{t("engine")}</span>
<Badge variant="secondary">{data.engineNames?.join(", ") || "-"}</Badge>
<div className="flex items-start justify-between text-sm gap-4">
<span className="text-muted-foreground shrink-0">{t("engine")}</span>
<div className="flex flex-wrap gap-1.5 justify-end">
{data.engineNames?.length ? (
data.engineNames.map((name) => (
<Badge key={name} variant="secondary" className="text-xs whitespace-nowrap">
{name}
</Badge>
))
) : (
<span className="text-muted-foreground">-</span>
)}
</div>
</div>
{data.startedAt && (
<div className="flex items-center justify-between text-sm">
@@ -234,37 +262,26 @@ export function ScanProgressDialog({
<Separator />
{/* Total progress */}
<div className="space-y-2">
<div className="flex items-center justify-between text-sm">
<span className="font-medium">{t("totalProgress")}</span>
<span className="font-mono text-muted-foreground">{data.progress}%</span>
{/* Tab switcher */}
<Tabs value={activeTab} onValueChange={(v) => setActiveTab(v as 'stages' | 'logs')}>
<TabsList className="grid w-full grid-cols-2">
<TabsTrigger value="stages">{t("tab_stages")}</TabsTrigger>
<TabsTrigger value="logs">{t("tab_logs")}</TabsTrigger>
</TabsList>
</Tabs>
{/* Tab content */}
{activeTab === 'stages' ? (
/* Stage list */
<div className="space-y-2 max-h-[300px] overflow-y-auto">
{data.stages.map((stage) => (
<StageRow key={stage.stage} stage={stage} t={t} />
))}
</div>
<div className="h-2 bg-primary/10 rounded-full overflow-hidden border border-border">
<div
className={`h-full transition-all ${
data.status === "completed" ? "bg-[#238636]/80" :
data.status === "failed" ? "bg-[#da3633]/80" :
data.status === "running" ? "bg-[#d29922]/80 progress-striped" :
data.status === "cancelled" ? "bg-[#848d97]/80" :
data.status === "cancelling" ? "bg-[#d29922]/80 progress-striped" :
data.status === "initiated" ? "bg-[#d29922]/80 progress-striped" :
"bg-muted-foreground/80"
}`}
style={{ width: `${data.status === "completed" ? 100 : data.progress}%` }}
/>
</div>
</div>
<Separator />
{/* Stage list */}
<div className="space-y-2 max-h-[300px] overflow-y-auto">
{data.stages.map((stage) => (
<StageRow key={stage.stage} stage={stage} t={t} />
))}
</div>
) : (
/* Log list */
<ScanLogList logs={logs} loading={logsLoading} />
)}
</DialogContent>
</Dialog>
)

View File

@@ -9,6 +9,16 @@ import {
DialogHeader,
DialogTitle,
} from "@/components/ui/dialog"
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
} from "@/components/ui/alert-dialog"
import { Button } from "@/components/ui/button"
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
@@ -34,6 +44,8 @@ import {
IconClock,
IconInfoCircle,
IconSearch,
IconSettings,
IconCode,
} from "@tabler/icons-react"
import { CronExpressionParser } from "cron-parser"
import cronstrue from "cronstrue/i18n"
@@ -44,9 +56,10 @@ import { useEngines } from "@/hooks/use-engines"
import { useOrganizations } from "@/hooks/use-organizations"
import { useTranslations, useLocale } from "next-intl"
import type { CreateScheduledScanRequest } from "@/types/scheduled-scan.types"
import type { ScanEngine } from "@/types/engine.types"
import type { Target } from "@/types/target.types"
import type { Organization } from "@/types/organization.types"
import { EnginePresetSelector } from "../engine-preset-selector"
import { ScanConfigEditor } from "../scan-config-editor"
interface CreateScheduledScanDialogProps {
@@ -85,14 +98,16 @@ export function CreateScheduledScanDialog({
const FULL_STEPS = [
{ id: 1, title: t("steps.basicInfo"), icon: IconInfoCircle },
{ id: 2, title: t("steps.scanMode"), icon: IconBuilding },
{ id: 3, title: t("steps.selectTarget"), icon: IconTarget },
{ id: 4, title: t("steps.scheduleSettings"), icon: IconClock },
{ id: 2, title: t("steps.selectTarget"), icon: IconTarget },
{ id: 3, title: t("steps.selectEngine"), icon: IconSettings },
{ id: 4, title: t("steps.editConfig"), icon: IconCode },
{ id: 5, title: t("steps.scheduleSettings"), icon: IconClock },
]
const PRESET_STEPS = [
{ id: 1, title: t("steps.basicInfo"), icon: IconInfoCircle },
{ id: 2, title: t("steps.scheduleSettings"), icon: IconClock },
{ id: 1, title: t("steps.selectEngine"), icon: IconSettings },
{ id: 2, title: t("steps.editConfig"), icon: IconCode },
{ id: 3, title: t("steps.scheduleSettings"), icon: IconClock },
]
const [orgSearchInput, setOrgSearchInput] = React.useState("")
@@ -120,10 +135,18 @@ export function CreateScheduledScanDialog({
const [name, setName] = React.useState("")
const [engineIds, setEngineIds] = React.useState<number[]>([])
const [selectedPresetId, setSelectedPresetId] = React.useState<string | null>(null)
const [selectionMode, setSelectionMode] = React.useState<SelectionMode>("organization")
const [selectedOrgId, setSelectedOrgId] = React.useState<number | null>(null)
const [selectedTargetId, setSelectedTargetId] = React.useState<number | null>(null)
const [cronExpression, setCronExpression] = React.useState("0 2 * * *")
// Configuration state management
const [configuration, setConfiguration] = React.useState("")
const [isConfigEdited, setIsConfigEdited] = React.useState(false)
const [isYamlValid, setIsYamlValid] = React.useState(true)
const [showOverwriteConfirm, setShowOverwriteConfirm] = React.useState(false)
const [pendingConfigChange, setPendingConfigChange] = React.useState<string | null>(null)
React.useEffect(() => {
if (open) {
@@ -140,25 +163,65 @@ export function CreateScheduledScanDialog({
}, [open, presetOrganizationId, presetOrganizationName, presetTargetId, presetTargetName, t])
const targets: Target[] = targetsData?.targets || []
const engines: ScanEngine[] = enginesData || []
const engines = enginesData || []
const organizations: Organization[] = organizationsData?.organizations || []
// Get selected engines for display
const selectedEngines = React.useMemo(() => {
if (!engineIds.length || !engines.length) return []
return engines.filter(e => engineIds.includes(e.id))
}, [engineIds, engines])
const resetForm = () => {
setName("")
setEngineIds([])
setSelectedPresetId(null)
setSelectionMode("organization")
setSelectedOrgId(null)
setSelectedTargetId(null)
setCronExpression("0 2 * * *")
setConfiguration("")
setIsConfigEdited(false)
resetStep()
}
const handleEngineToggle = (engineId: number, checked: boolean) => {
if (checked) {
setEngineIds((prev) => [...prev, engineId])
// Handle configuration change from preset selector (may need confirmation)
const handlePresetConfigChange = React.useCallback((value: string) => {
if (isConfigEdited && configuration !== value) {
setPendingConfigChange(value)
setShowOverwriteConfirm(true)
} else {
setEngineIds((prev) => prev.filter((id) => id !== engineId))
setConfiguration(value)
setIsConfigEdited(false)
}
}, [isConfigEdited, configuration])
// Handle manual config editing
const handleManualConfigChange = React.useCallback((value: string) => {
setConfiguration(value)
setIsConfigEdited(true)
}, [])
const handleEngineIdsChange = React.useCallback((newEngineIds: number[]) => {
setEngineIds(newEngineIds)
}, [])
const handleOverwriteConfirm = () => {
if (pendingConfigChange !== null) {
setConfiguration(pendingConfigChange)
setIsConfigEdited(false)
}
setShowOverwriteConfirm(false)
setPendingConfigChange(null)
}
const handleOverwriteCancel = () => {
setShowOverwriteConfirm(false)
setPendingConfigChange(null)
}
const handleYamlValidationChange = (isValid: boolean) => {
setIsYamlValid(isValid)
}
const handleOpenChange = (isOpen: boolean) => {
@@ -177,11 +240,15 @@ export function CreateScheduledScanDialog({
const validateCurrentStep = (): boolean => {
if (hasPreset) {
switch (currentStep) {
case 1:
if (!name.trim()) { toast.error(t("form.taskNameRequired")); return false }
case 1: // Select engine
if (!selectedPresetId) { toast.error(t("form.scanEngineRequired")); return false }
if (engineIds.length === 0) { toast.error(t("form.scanEngineRequired")); return false }
return true
case 2:
case 2: // Edit config
if (!configuration.trim()) { toast.error(t("form.configurationRequired")); return false }
if (!isYamlValid) { toast.error(t("form.yamlInvalid")); return false }
return true
case 3: // Schedule
const parts = cronExpression.trim().split(/\s+/)
if (parts.length !== 5) { toast.error(t("form.cronRequired")); return false }
return true
@@ -190,19 +257,25 @@ export function CreateScheduledScanDialog({
}
switch (currentStep) {
case 1:
case 1: // Basic info
if (!name.trim()) { toast.error(t("form.taskNameRequired")); return false }
if (engineIds.length === 0) { toast.error(t("form.scanEngineRequired")); return false }
return true
case 2: return true
case 3:
case 2: // Select target
if (selectionMode === "organization") {
if (!selectedOrgId) { toast.error(t("toast.selectOrganization")); return false }
} else {
if (!selectedTargetId) { toast.error(t("toast.selectTarget")); return false }
}
return true
case 4:
case 3: // Select engine
if (!selectedPresetId) { toast.error(t("form.scanEngineRequired")); return false }
if (engineIds.length === 0) { toast.error(t("form.scanEngineRequired")); return false }
return true
case 4: // Edit config
if (!configuration.trim()) { toast.error(t("form.configurationRequired")); return false }
if (!isYamlValid) { toast.error(t("form.yamlInvalid")); return false }
return true
case 5: // Schedule
const cronParts = cronExpression.trim().split(/\s+/)
if (cronParts.length !== 5) { toast.error(t("form.cronRequired")); return false }
return true
@@ -216,7 +289,9 @@ export function CreateScheduledScanDialog({
if (!validateCurrentStep()) return
const request: CreateScheduledScanRequest = {
name: name.trim(),
configuration: configuration.trim(),
engineIds: engineIds,
engineNames: selectedEngines.map(e => e.name),
cronExpression: cronExpression.trim(),
}
if (selectionMode === "organization" && selectedOrgId) {
@@ -262,82 +337,30 @@ export function CreateScheduledScanDialog({
return (
<Dialog open={open} onOpenChange={handleOpenChange}>
<DialogContent className="max-w-2xl max-h-[90vh] overflow-hidden flex flex-col">
<DialogHeader>
<DialogTitle>{t("createTitle")}</DialogTitle>
<DialogDescription>{t("createDesc")}</DialogDescription>
<DialogContent className="max-w-[900px] p-0 gap-0">
<DialogHeader className="px-6 pt-6 pb-4">
<div className="flex items-center justify-between">
<div>
<DialogTitle>{t("createTitle")}</DialogTitle>
<DialogDescription className="mt-1">{t("createDesc")}</DialogDescription>
</div>
{/* Step indicator */}
<div className="text-sm text-muted-foreground mr-8">
{t("stepIndicator", { current: currentStep, total: totalSteps })}
</div>
</div>
</DialogHeader>
<div className="flex items-center justify-between px-2 py-4">
{steps.map((step, index) => (
<React.Fragment key={step.id}>
<div className="flex flex-col items-center gap-2">
<div className={cn(
"flex h-10 w-10 items-center justify-center rounded-full border-2 transition-colors",
currentStep > step.id ? "border-primary bg-primary text-primary-foreground"
: currentStep === step.id ? "border-primary text-primary"
: "border-muted text-muted-foreground"
)}>
{currentStep > step.id ? <IconCheck className="h-5 w-5" /> : <step.icon className="h-5 w-5" />}
</div>
<span className={cn("text-xs font-medium", currentStep >= step.id ? "text-foreground" : "text-muted-foreground")}>
{step.title}
</span>
</div>
{index < steps.length - 1 && (
<div className={cn("h-0.5 flex-1 mx-2", currentStep > step.id ? "bg-primary" : "bg-muted")} />
)}
</React.Fragment>
))}
</div>
<Separator />
<div className="flex-1 overflow-y-auto py-4 px-1">
{currentStep === 1 && (
<div className="space-y-6">
<div className="border-t h-[480px] overflow-hidden">
{/* Step 1: Basic Info + Scan Mode */}
{currentStep === 1 && !hasPreset && (
<div className="p-6 space-y-6 overflow-y-auto h-full">
<div className="space-y-2">
<Label htmlFor="name">{t("form.taskName")} *</Label>
<Input id="name" placeholder={t("form.taskNamePlaceholder")} value={name} onChange={(e) => setName(e.target.value)} />
<p className="text-xs text-muted-foreground">{t("form.taskNameDesc")}</p>
</div>
<div className="space-y-2">
<Label>{t("form.scanEngine")} *</Label>
{engineIds.length > 0 && (
<p className="text-xs text-muted-foreground">{t("form.selectedEngines", { count: engineIds.length })}</p>
)}
<div className="border rounded-md p-3 max-h-[200px] overflow-y-auto space-y-2">
{engines.length === 0 ? (
<p className="text-sm text-muted-foreground">{t("form.noEngine")}</p>
) : (
engines.map((engine) => (
<label
key={engine.id}
htmlFor={`engine-${engine.id}`}
className={cn(
"flex items-center gap-3 p-2 rounded-lg cursor-pointer transition-all",
engineIds.includes(engine.id)
? "bg-primary/10 border border-primary/30"
: "hover:bg-muted/50 border border-transparent"
)}
>
<Checkbox
id={`engine-${engine.id}`}
checked={engineIds.includes(engine.id)}
onCheckedChange={(checked) => handleEngineToggle(engine.id, checked as boolean)}
/>
<span className="text-sm">{engine.name}</span>
</label>
))
)}
</div>
<p className="text-xs text-muted-foreground">{t("form.scanEngineDesc")}</p>
</div>
</div>
)}
{currentStep === 2 && !hasPreset && (
<div className="space-y-6">
<Separator />
<div className="space-y-3">
<Label>{t("form.selectScanMode")}</Label>
<div className="grid grid-cols-2 gap-4">
@@ -364,15 +387,16 @@ export function CreateScheduledScanDialog({
{selectionMode === "target" && <IconCheck className="h-5 w-5 text-primary" />}
</div>
</div>
<p className="text-sm text-muted-foreground">
{selectionMode === "organization" ? t("form.organizationScanHint") : t("form.targetScanHint")}
</p>
</div>
<p className="text-sm text-muted-foreground">
{selectionMode === "organization" ? t("form.organizationScanHint") : t("form.targetScanHint")}
</p>
</div>
)}
{currentStep === 3 && !hasPreset && (
<div className="space-y-4">
{/* Step 2: Select Target (Organization or Target) */}
{currentStep === 2 && !hasPreset && (
<div className="p-6 space-y-4 overflow-y-auto h-full">
{selectionMode === "organization" ? (
<>
<Label>{t("form.selectOrganization")}</Label>
@@ -451,8 +475,34 @@ export function CreateScheduledScanDialog({
</div>
)}
{/* Step 3 (full) / Step 1 (preset): Select Engine */}
{((currentStep === 3 && !hasPreset) || (currentStep === 1 && hasPreset)) && engines.length > 0 && (
<EnginePresetSelector
engines={engines}
selectedEngineIds={engineIds}
selectedPresetId={selectedPresetId}
onPresetChange={setSelectedPresetId}
onEngineIdsChange={handleEngineIdsChange}
onConfigurationChange={handlePresetConfigChange}
disabled={isPending}
/>
)}
{/* Step 4 (full) / Step 2 (preset): Edit Configuration */}
{((currentStep === 4 && !hasPreset) || (currentStep === 2 && hasPreset)) && (
<div className="space-y-6">
<ScanConfigEditor
configuration={configuration}
onChange={handleManualConfigChange}
onValidationChange={handleYamlValidationChange}
selectedEngines={selectedEngines}
isConfigEdited={isConfigEdited}
disabled={isPending}
/>
)}
{/* Step 5 (full) / Step 3 (preset): Schedule Settings */}
{((currentStep === 5 && !hasPreset) || (currentStep === 3 && hasPreset)) && (
<div className="p-6 space-y-6 overflow-y-auto h-full">
<div className="space-y-2">
<Label>{t("form.cronExpression")} *</Label>
<Input placeholder={t("form.cronPlaceholder")} value={cronExpression} onChange={(e) => setCronExpression(e.target.value)} className="font-mono" />
@@ -489,9 +539,7 @@ export function CreateScheduledScanDialog({
)}
</div>
<Separator />
<div className="flex justify-between pt-4">
<div className="px-6 py-4 border-t flex justify-between">
<Button variant="outline" onClick={goToPrevStep} disabled={currentStep === 1}>
<IconChevronLeft className="h-4 w-4 mr-1" />{t("buttons.previous")}
</Button>
@@ -504,6 +552,26 @@ export function CreateScheduledScanDialog({
)}
</div>
</DialogContent>
{/* Overwrite confirmation dialog */}
<AlertDialog open={showOverwriteConfirm} onOpenChange={setShowOverwriteConfirm}>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>{t("overwriteConfirm.title")}</AlertDialogTitle>
<AlertDialogDescription>
{t("overwriteConfirm.description")}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel onClick={handleOverwriteCancel}>
{t("overwriteConfirm.cancel")}
</AlertDialogCancel>
<AlertDialogAction onClick={handleOverwriteConfirm}>
{t("overwriteConfirm.confirm")}
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
</Dialog>
)
}

View File

@@ -36,6 +36,7 @@ const converter = new AnsiToHtml({
export function AnsiLogViewer({ content, className }: AnsiLogViewerProps) {
const containerRef = useRef<HTMLPreElement>(null)
const isAtBottomRef = useRef(true) // Track whether the user is scrolled to the bottom
// Convert ANSI to HTML
const htmlContent = useMemo(() => {
@@ -43,9 +44,24 @@ export function AnsiLogViewer({ content, className }: AnsiLogViewerProps) {
return converter.toHtml(content)
}, [content])
// Auto-scroll to the bottom
// Listen for scroll events to detect whether the user is at the bottom
useEffect(() => {
if (containerRef.current) {
const container = containerRef.current
if (!container) return
const handleScroll = () => {
const { scrollTop, scrollHeight, clientHeight } = container
// Treat the user as being at the bottom within a 30px tolerance
isAtBottomRef.current = scrollHeight - scrollTop - clientHeight < 30
}
container.addEventListener('scroll', handleScroll)
return () => container.removeEventListener('scroll', handleScroll)
}, [])
// Only auto-scroll when the user is already at the bottom
useEffect(() => {
if (containerRef.current && isAtBottomRef.current) {
containerRef.current.scrollTop = containerRef.current.scrollHeight
}
}, [htmlContent])

View File

@@ -0,0 +1,194 @@
"use client"
import React, { useState, useCallback, useEffect } from "react"
import Editor from "@monaco-editor/react"
import * as yaml from "js-yaml"
import { AlertCircle } from "lucide-react"
import { useColorTheme } from "@/hooks/use-color-theme"
import { useTranslations } from "next-intl"
import { cn } from "@/lib/utils"
interface YamlEditorProps {
value: string
onChange: (value: string) => void
placeholder?: string
disabled?: boolean
height?: string
className?: string
onValidationChange?: (isValid: boolean, error?: { message: string; line?: number; column?: number }) => void
}
/**
* YAML Editor component with Monaco Editor
* Provides a VS Code-like editing experience with syntax highlighting and validation
*/
export function YamlEditor({
value,
onChange,
placeholder,
disabled = false,
height = "100%",
className,
onValidationChange,
}: YamlEditorProps) {
const t = useTranslations("common.yamlEditor")
const { currentTheme } = useColorTheme()
const [shouldMount, setShouldMount] = useState(false)
const [yamlError, setYamlError] = useState<{ message: string; line?: number; column?: number } | null>(null)
// Delay mounting to avoid Monaco hitTest error on rapid container changes
useEffect(() => {
const timer = setTimeout(() => setShouldMount(true), 50)
return () => clearTimeout(timer)
}, [])
// Check for duplicate keys in YAML content
const checkDuplicateKeys = useCallback((content: string): { key: string; line: number } | null => {
const lines = content.split('\n')
const keyStack: { indent: number; keys: Set<string> }[] = [{ indent: -1, keys: new Set() }]
for (let i = 0; i < lines.length; i++) {
const line = lines[i]
// Skip empty lines and comments
if (!line.trim() || line.trim().startsWith('#')) continue
// Match top-level block keys (no leading whitespace, nothing but an optional comment after the colon)
const topLevelMatch = line.match(/^([a-zA-Z_][a-zA-Z0-9_-]*):\s*(?:#.*)?$/)
if (topLevelMatch) {
const key = topLevelMatch[1]
const currentLevel = keyStack[0]
if (currentLevel.keys.has(key)) {
return { key, line: i + 1 }
}
currentLevel.keys.add(key)
}
}
return null
}, [])
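// Illustrative example (hypothetical engine config, not part of this diff): for the content
//   subfinder:
//     threads: 10
//   httpx:
//     rate: 100
//   subfinder:
//     timeout: 5
// checkDuplicateKeys returns { key: "subfinder", line: 5 }; indented keys are ignored, and
// top-level keys with inline values are not flagged by this check.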
// Validate YAML syntax
const validateYaml = useCallback((content: string) => {
if (!content.trim()) {
setYamlError(null)
onValidationChange?.(true)
return true
}
// First check for duplicate keys
const duplicateKey = checkDuplicateKeys(content)
if (duplicateKey) {
const errorInfo = {
message: t("duplicateKey", { key: duplicateKey.key }),
line: duplicateKey.line,
column: 1,
}
setYamlError(errorInfo)
onValidationChange?.(false, errorInfo)
return false
}
try {
yaml.load(content)
setYamlError(null)
onValidationChange?.(true)
return true
} catch (error) {
const yamlException = error as yaml.YAMLException
const errorInfo = {
message: yamlException.message,
line: yamlException.mark?.line ? yamlException.mark.line + 1 : undefined,
column: yamlException.mark?.column ? yamlException.mark.column + 1 : undefined,
}
setYamlError(errorInfo)
onValidationChange?.(false, errorInfo)
return false
}
}, [onValidationChange, checkDuplicateKeys, t])
// Handle editor content change
const handleEditorChange = useCallback((newValue: string | undefined) => {
const content = newValue || ""
onChange(content)
validateYaml(content)
}, [onChange, validateYaml])
// Handle editor mount
const handleEditorDidMount = useCallback(() => {
// Validate initial content
validateYaml(value)
}, [validateYaml, value])
return (
<div className={cn("flex flex-col h-full", className)}>
{/* Monaco Editor */}
<div className={cn("flex-1 overflow-hidden", yamlError ? 'border-destructive' : '')}>
{shouldMount ? (
<Editor
height={height}
defaultLanguage="yaml"
value={value}
onChange={handleEditorChange}
onMount={handleEditorDidMount}
theme={currentTheme.isDark ? "vs-dark" : "light"}
options={{
minimap: { enabled: false },
fontSize: 12,
lineNumbers: "off",
wordWrap: "off",
scrollBeyondLastLine: false,
automaticLayout: true,
tabSize: 2,
insertSpaces: true,
formatOnPaste: true,
formatOnType: true,
folding: true,
foldingStrategy: "indentation",
showFoldingControls: "mouseover",
bracketPairColorization: {
enabled: true,
},
padding: {
top: 8,
bottom: 8,
},
readOnly: disabled,
placeholder: placeholder,
}}
loading={
<div className="flex items-center justify-center h-full bg-muted/30">
<div className="flex flex-col items-center gap-2">
<div className="h-6 w-6 animate-spin rounded-full border-2 border-primary border-t-transparent" />
<p className="text-xs text-muted-foreground">{t("loading")}</p>
</div>
</div>
}
/>
) : (
<div className="flex items-center justify-center h-full bg-muted/30">
<div className="flex flex-col items-center gap-2">
<div className="h-6 w-6 animate-spin rounded-full border-2 border-primary border-t-transparent" />
<p className="text-xs text-muted-foreground">{t("loading")}</p>
</div>
</div>
)}
</div>
{/* Error message display */}
{yamlError && (
<div className="flex items-start gap-2 p-2 bg-destructive/10 border-t border-destructive/20">
<AlertCircle className="h-3.5 w-3.5 text-destructive mt-0.5 flex-shrink-0" />
<div className="flex-1 text-xs">
<p className="font-medium text-destructive">
{yamlError.line && yamlError.column
? t("errorLocation", { line: yamlError.line, column: yamlError.column })
: t("syntaxError")}
</p>
<p className="text-muted-foreground truncate">{yamlError.message}</p>
</div>
</div>
)}
</div>
)
}
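A minimal consumer sketch (not part of this diff; the form component and its prop names are illustrative) showing how value, onChange and onValidationChange can be wired to gate submission:

"use client"
import React, { useState } from "react"
import { YamlEditor } from "@/components/ui/yaml-editor"
import { Button } from "@/components/ui/button"

export function YamlConfigForm({ onSubmit }: { onSubmit: (value: string) => void }) {
  const [value, setValue] = useState("")
  const [isValid, setIsValid] = useState(true)

  return (
    <div className="flex flex-col gap-2 h-[320px]">
      <YamlEditor
        value={value}
        onChange={setValue}
        onValidationChange={(valid) => setIsValid(valid)}
      />
      {/* Keep the action disabled while the YAML is empty or invalid */}
      <Button disabled={!isValid || !value.trim()} onClick={() => onSubmit(value)}>
        Save
      </Button>
    </div>
  )
}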

View File

@@ -0,0 +1,106 @@
/**
* Scan log polling hook
*
* Features:
* - Initial load fetches all logs
* - Incrementally polls for new logs (3s interval)
* - Stops polling once the scan has finished
*/
import { useState, useEffect, useCallback, useRef } from 'react'
import { getScanLogs, type ScanLog } from '@/services/scan.service'
interface UseScanLogsOptions {
scanId: number
enabled?: boolean
pollingInterval?: number // defaults to 3000ms
}
interface UseScanLogsReturn {
logs: ScanLog[]
loading: boolean
refetch: () => void
}
export function useScanLogs({
scanId,
enabled = true,
pollingInterval = 3000,
}: UseScanLogsOptions): UseScanLogsReturn {
const [logs, setLogs] = useState<ScanLog[]>([])
const [loading, setLoading] = useState(false)
const lastLogId = useRef<number | null>(null)
const isMounted = useRef(true)
const fetchLogs = useCallback(async (incremental = false) => {
if (!enabled || !isMounted.current) return
setLoading(true)
try {
const params: { limit: number; afterId?: number } = { limit: 200 }
if (incremental && lastLogId.current !== null) {
params.afterId = lastLogId.current
}
const response = await getScanLogs(scanId, params)
const newLogs = response.results
if (!isMounted.current) return
if (newLogs.length > 0) {
// Use the log ID as the cursor: IDs are unique and monotonically increasing, which avoids duplicate logs caused by repeated timestamps
lastLogId.current = newLogs[newLogs.length - 1].id
if (incremental) {
// Deduplicate by ID to guard against duplicates from React Strict Mode or race conditions
setLogs(prev => {
const existingIds = new Set(prev.map(l => l.id))
const uniqueNewLogs = newLogs.filter(l => !existingIds.has(l.id))
return uniqueNewLogs.length > 0 ? [...prev, ...uniqueNewLogs] : prev
})
} else {
setLogs(newLogs)
}
}
} catch (error) {
console.error('Failed to fetch scan logs:', error)
} finally {
if (isMounted.current) {
setLoading(false)
}
}
}, [scanId, enabled])
// Initial load
useEffect(() => {
isMounted.current = true
if (enabled) {
// Reset state
setLogs([])
lastLogId.current = null
fetchLogs(false)
}
return () => {
isMounted.current = false
}
}, [scanId, enabled])
// Polling
useEffect(() => {
if (!enabled) return
const interval = setInterval(() => {
fetchLogs(true) // incremental fetch
}, pollingInterval)
return () => clearInterval(interval)
}, [enabled, pollingInterval, fetchLogs])
const refetch = useCallback(() => {
setLogs([])
lastLogId.current = null
fetchLogs(false)
}, [fetchLogs])
return { logs, loading, refetch }
}
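A minimal sketch of consuming this hook from a scan progress view; only `useScanLogs` and its options come from the file above, while the panel component, its props, and the import path are illustrative.
import { useScanLogs } from "@/hooks/use-scan-logs" // assumed path for the hook above

// Hypothetical log panel that polls while the scan is running
export function ScanLogPanel({ scanId, isRunning }: { scanId: number; isRunning: boolean }) {
  // Polls every 3s while enabled; stops automatically once the scan is no longer running
  const { logs, loading, refetch } = useScanLogs({ scanId, enabled: isRunning })

  return (
    <div>
      <button onClick={refetch} disabled={loading}>Reload</button>
      <ul>
        {logs.map(log => (
          <li key={log.id}>[{log.level}] {log.content}</li>
        ))}
      </ul>
    </div>
  )
}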

View File

@@ -80,3 +80,14 @@ export function parseEngineCapabilities(configuration: string): string[] {
return []
}
}
/**
* Merge multiple engine configurations into a single YAML string
* Simply concatenates configurations with separators
*/
export function mergeEngineConfigurations(configurations: string[]): string {
const validConfigs = configurations.filter(c => c && c.trim())
if (validConfigs.length === 0) return ""
if (validConfigs.length === 1) return validConfigs[0]
return validConfigs.join("\n\n# ---\n\n")
}
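For illustration, merging two non-empty snippets yields a single string joined by the `# ---` separator, with blank entries dropped (the YAML values here are made up):
const merged = mergeEngineConfigurations([
  "subdomain_discovery:\n  enabled: true",
  "",                                    // filtered out
  "port_scan:\n  top_ports: 100",
])
// merged === "subdomain_discovery:\n  enabled: true\n\n# ---\n\nport_scan:\n  top_ports: 100"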

View File

@@ -175,6 +175,13 @@
"website": "Website",
"description": "Description"
},
"yamlEditor": {
"syntaxError": "Syntax Error",
"syntaxValid": "Syntax Valid",
"errorLocation": "Line {line}, Column {column}",
"loading": "Loading editor...",
"duplicateKey": "Duplicate key '{key}' found. Later values will override earlier ones. Please remove duplicates."
},
"theme": {
"switchToLight": "Switch to light mode",
"switchToDark": "Switch to dark mode",
@@ -654,7 +661,40 @@
"noConfig": "No config",
"initiating": "Initiating...",
"startScan": "Start Scan",
"selectedCount": "{count} engines selected"
"selectedCount": "{count} engines selected",
"configTitle": "Scan Configuration",
"configEdited": "Edited",
"stepIndicator": "Step {current}/{total}",
"back": "Back",
"next": "Next",
"steps": {
"selectEngine": "Select Engine",
"editConfig": "Edit Config"
},
"presets": {
"title": "Quick Select",
"fullScan": "Full Scan",
"fullScanDesc": "Complete security assessment covering asset discovery to vulnerability detection",
"recon": "Reconnaissance",
"reconDesc": "Discover and identify target assets including subdomains, ports, sites and fingerprints",
"vulnScan": "Vulnerability Scan",
"vulnScanDesc": "Detect security vulnerabilities on known assets",
"custom": "Custom",
"customDesc": "Manually select engine combination",
"customHint": "Click to manually select engines",
"selectHint": "Please select a scan preset",
"selectEngines": "Select Engines",
"enginesCount": "engines",
"capabilities": "Capabilities",
"usedEngines": "Used Engines",
"noCapabilities": "Please select engines"
},
"overwriteConfirm": {
"title": "Overwrite Configuration",
"description": "You have manually edited the configuration. Changing engines will overwrite your changes. Continue?",
"cancel": "Cancel",
"confirm": "Overwrite"
}
},
"cron": {
"everyMinute": "Every minute",
@@ -697,6 +737,8 @@
"status": "Status",
"errorReason": "Error Reason",
"totalProgress": "Total Progress",
"tab_stages": "Stages",
"tab_logs": "Logs",
"status_running": "Scanning",
"status_cancelled": "Cancelled",
"status_completed": "Completed",
@@ -736,10 +778,13 @@
"createDesc": "Configure scheduled scan task and set execution plan",
"editTitle": "Edit Scheduled Scan",
"editDesc": "Modify scheduled scan task configuration",
"stepIndicator": "Step {current}/{total}",
"steps": {
"basicInfo": "Basic Info",
"scanMode": "Scan Mode",
"selectTarget": "Select Target",
"selectEngine": "Select Engine",
"editConfig": "Edit Config",
"scheduleSettings": "Schedule Settings"
},
"form": {
@@ -749,8 +794,14 @@
"taskNameRequired": "Please enter task name",
"scanEngine": "Scan Engine",
"scanEnginePlaceholder": "Select scan engine",
"scanEngineDesc": "Select the scan engine configuration to use",
"scanEngineDesc": "Select engine to auto-fill configuration, or edit directly",
"scanEngineRequired": "Please select a scan engine",
"configuration": "Scan Configuration",
"configurationPlaceholder": "Enter YAML scan configuration...",
"configurationDesc": "YAML format scan configuration, select engine to auto-fill or edit manually",
"configurationRequired": "Please enter scan configuration",
"yamlInvalid": "Invalid YAML configuration, please check syntax",
"configEdited": "Edited",
"selectScanMode": "Select Scan Mode",
"organizationScan": "Organization Scan",
"organizationScanDesc": "Select organization, dynamically fetch all targets at execution",
@@ -782,6 +833,8 @@
"organizationModeHint": "In organization scan mode, all targets under this organization will be dynamically fetched at execution",
"noAvailableTarget": "No available targets",
"noEngine": "No engines available",
"noConfig": "No config",
"capabilitiesCount": "{count} capabilities",
"selected": "Selected",
"selectedEngines": "{count} engines selected"
},
@@ -803,7 +856,14 @@
},
"toast": {
"selectOrganization": "Please select an organization",
"selectTarget": "Please select a scan target"
"selectTarget": "Please select a scan target",
"configConflict": "Configuration conflict"
},
"overwriteConfirm": {
"title": "Overwrite Configuration",
"description": "You have manually edited the configuration. Changing engines will overwrite your changes. Do you want to continue?",
"cancel": "Cancel",
"confirm": "Overwrite"
}
},
"engine": {
@@ -1405,6 +1465,8 @@
"initiateScanFailed": "Failed to initiate scan",
"noScansCreated": "No scan tasks were created",
"unknownError": "Unknown error",
"noEngineSelected": "Please select at least one scan engine",
"emptyConfig": "Scan configuration cannot be empty",
"engineNameRequired": "Please enter engine name",
"configRequired": "Configuration content is required",
"yamlSyntaxError": "YAML syntax error",
@@ -1709,7 +1771,8 @@
},
"step1Title": "Enter Targets",
"step2Title": "Select Engines",
"step3Title": "Confirm",
"step3Title": "Edit Config",
"stepIndicator": "Step {current}/{total}",
"step1Hint": "Enter scan targets in the left input box, one per line",
"step": "Step {current}/{total} · {title}",
"targetPlaceholder": "Enter one target per line, supported formats:\n\nDomain: example.com, sub.example.com\nIP Address: 192.168.1.1, 10.0.0.1\nCIDR: 192.168.1.0/24, 10.0.0.0/8\nURL: https://example.com/api/v1",
@@ -1737,10 +1800,19 @@
"andMore": "{count} more...",
"selectedEngines": "Selected Engines",
"confirmSummary": "Will scan {targetCount} targets with {engineCount} engines",
"configTitle": "Scan Configuration",
"configEdited": "Edited",
"overwriteConfirm": {
"title": "Overwrite Configuration",
"description": "You have manually edited the configuration. Changing engines will overwrite your changes. Continue?",
"cancel": "Cancel",
"confirm": "Overwrite"
},
"toast": {
"noValidTarget": "Please enter at least one valid target",
"hasInvalidInputs": "{count} invalid inputs, please fix before continuing",
"selectEngine": "Please select a scan engine",
"emptyConfig": "Scan configuration cannot be empty",
"getEnginesFailed": "Failed to get engine list",
"createFailed": "Failed to create scan task",
"createSuccess": "Created {count} scan tasks",

View File

@@ -175,6 +175,13 @@
"website": "官网",
"description": "描述"
},
"yamlEditor": {
"syntaxError": "语法错误",
"syntaxValid": "语法正确",
"errorLocation": "第 {line} 行,第 {column} 列",
"loading": "加载编辑器...",
"duplicateKey": "发现重复的配置项 '{key}',后面的配置会覆盖前面的,请删除重复项"
},
"theme": {
"switchToLight": "切换到亮色模式",
"switchToDark": "切换到暗色模式",
@@ -654,7 +661,40 @@
"noConfig": "无配置",
"initiating": "发起中...",
"startScan": "开始扫描",
"selectedCount": "已选择 {count} 个引擎"
"selectedCount": "已选择 {count} 个引擎",
"configTitle": "扫描配置",
"configEdited": "已编辑",
"stepIndicator": "步骤 {current}/{total}",
"back": "上一步",
"next": "下一步",
"steps": {
"selectEngine": "选择引擎",
"editConfig": "编辑配置"
},
"presets": {
"title": "推荐组合",
"fullScan": "全量扫描",
"fullScanDesc": "完整的安全评估,覆盖资产发现到漏洞检测的全部流程",
"recon": "信息收集",
"reconDesc": "发现和识别目标资产,包括子域名、端口、站点和指纹",
"vulnScan": "漏洞扫描",
"vulnScanDesc": "对已知资产进行安全漏洞检测",
"custom": "自定义",
"customDesc": "手动选择引擎组合",
"customHint": "点击选择后手动勾选引擎",
"selectHint": "请选择一个扫描方案",
"selectEngines": "选择引擎",
"enginesCount": "个引擎",
"capabilities": "涉及能力",
"usedEngines": "使用引擎",
"noCapabilities": "请选择引擎"
},
"overwriteConfirm": {
"title": "覆盖配置确认",
"description": "您已手动编辑过配置,切换引擎将覆盖当前配置。是否继续?",
"cancel": "取消",
"confirm": "确认覆盖"
}
},
"cron": {
"everyMinute": "每分钟",
@@ -697,6 +737,8 @@
"status": "状态",
"errorReason": "错误原因",
"totalProgress": "总进度",
"tab_stages": "阶段",
"tab_logs": "日志",
"status_running": "扫描中",
"status_cancelled": "已取消",
"status_completed": "已完成",
@@ -736,10 +778,13 @@
"createDesc": "配置定时扫描任务,设置执行计划",
"editTitle": "编辑定时扫描",
"editDesc": "修改定时扫描任务配置",
"stepIndicator": "步骤 {current}/{total}",
"steps": {
"basicInfo": "基本信息",
"scanMode": "扫描模式",
"selectTarget": "选择目标",
"selectEngine": "选择引擎",
"editConfig": "编辑配置",
"scheduleSettings": "调度设置"
},
"form": {
@@ -749,8 +794,14 @@
"taskNameRequired": "请输入任务名称",
"scanEngine": "扫描引擎",
"scanEnginePlaceholder": "选择扫描引擎",
"scanEngineDesc": "选择要使用的扫描引擎配置",
"scanEngineDesc": "选择引擎可快速填充配置,也可直接编辑配置",
"scanEngineRequired": "请选择扫描引擎",
"configuration": "扫描配置",
"configurationPlaceholder": "请输入 YAML 格式的扫描配置...",
"configurationDesc": "YAML 格式的扫描配置,可选择引擎自动填充或手动编辑",
"configurationRequired": "请输入扫描配置",
"yamlInvalid": "YAML 配置格式错误,请检查语法",
"configEdited": "已编辑",
"selectScanMode": "选择扫描模式",
"organizationScan": "组织扫描",
"organizationScanDesc": "选择组织,执行时动态获取其下所有目标",
@@ -782,6 +833,8 @@
"organizationModeHint": "组织扫描模式下,执行时将动态获取该组织下所有目标",
"noAvailableTarget": "暂无可用目标",
"noEngine": "暂无可用引擎",
"noConfig": "无配置",
"capabilitiesCount": "{count} 项能力",
"selected": "已选择",
"selectedEngines": "已选择 {count} 个引擎"
},
@@ -803,7 +856,14 @@
},
"toast": {
"selectOrganization": "请选择一个组织",
"selectTarget": "请选择一个扫描目标"
"selectTarget": "请选择一个扫描目标",
"configConflict": "配置冲突"
},
"overwriteConfirm": {
"title": "覆盖配置确认",
"description": "您已手动编辑了配置,切换引擎将覆盖当前配置。确定要继续吗?",
"cancel": "取消",
"confirm": "确定覆盖"
}
},
"engine": {
@@ -1405,6 +1465,8 @@
"initiateScanFailed": "发起扫描失败",
"noScansCreated": "未创建任何扫描任务",
"unknownError": "未知错误",
"noEngineSelected": "请选择至少一个扫描引擎",
"emptyConfig": "扫描配置不能为空",
"engineNameRequired": "请输入引擎名称",
"configRequired": "配置内容不能为空",
"yamlSyntaxError": "YAML 语法错误",
@@ -1709,7 +1771,8 @@
},
"step1Title": "输入目标",
"step2Title": "选择引擎",
"step3Title": "确认",
"step3Title": "编辑配置",
"stepIndicator": "步骤 {current}/{total}",
"step1Hint": "在左侧输入框中输入扫描目标,每行一个",
"step": "步骤 {current}/{total} · {title}",
"targetPlaceholder": "每行输入一个目标,支持以下格式:\n\n域名: example.com, sub.example.com\nIP地址: 192.168.1.1, 10.0.0.1\nCIDR网段: 192.168.1.0/24, 10.0.0.0/8\nURL: https://example.com/api/v1",
@@ -1737,10 +1800,19 @@
"andMore": "还有 {count} 个...",
"selectedEngines": "已选引擎",
"confirmSummary": "将使用 {engineCount} 个引擎扫描 {targetCount} 个目标",
"configTitle": "扫描配置",
"configEdited": "已编辑",
"overwriteConfirm": {
"title": "覆盖配置确认",
"description": "您已手动编辑过配置,切换引擎将覆盖当前配置。是否继续?",
"cancel": "取消",
"confirm": "确认覆盖"
},
"toast": {
"noValidTarget": "请输入至少一个有效目标",
"hasInvalidInputs": "存在 {count} 个无效输入,请修正后继续",
"selectEngine": "请选择扫描引擎",
"emptyConfig": "扫描配置不能为空",
"getEnginesFailed": "获取引擎列表失败",
"createFailed": "创建扫描任务失败",
"createSuccess": "已创建 {count} 个扫描任务",

View File

@@ -113,3 +113,40 @@ export async function getScanStatistics(): Promise<ScanStatistics> {
const res = await api.get<ScanStatistics>('/scans/statistics/')
return res.data
}
/**
* Scan log entry type
*/
export interface ScanLog {
id: number
level: 'info' | 'warning' | 'error'
content: string
createdAt: string
}
/**
* Get scan logs response type
*/
export interface GetScanLogsResponse {
results: ScanLog[]
hasMore: boolean
}
/**
* Get scan logs params type
*/
export interface GetScanLogsParams {
afterId?: number
limit?: number
}
/**
* Get scan logs
* @param scanId - Scan ID
* @param params - Query parameters (afterId for cursor, limit for max results)
* @returns Scan logs with hasMore indicator
*/
export async function getScanLogs(scanId: number, params?: GetScanLogsParams): Promise<GetScanLogsResponse> {
const res = await api.get<GetScanLogsResponse>(`/scans/${scanId}/logs/`, { params })
return res.data
}
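A sketch of cursor-style incremental fetching with `afterId`, mirroring what the polling hook above does (the helper name and batch size are arbitrary):
// Fetch one batch and return the advanced cursor (the last seen log ID)
async function pollOnce(scanId: number, cursor: number | null): Promise<number | null> {
  const params: GetScanLogsParams = { limit: 200 }
  if (cursor !== null) params.afterId = cursor
  const { results } = await getScanLogs(scanId, params)
  return results.length > 0 ? results[results.length - 1].id : cursor
}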

View File

@@ -42,34 +42,17 @@ export class SearchService {
/**
* Export search results as CSV
* GET /api/assets/search/export/
*
* Uses the browser's native download, which can show download progress
*/
static async exportCSV(query: string, assetType: AssetType): Promise<void> {
const queryParams = new URLSearchParams()
queryParams.append('q', query)
queryParams.append('asset_type', assetType)
const response = await api.get(
`/assets/search/export/?${queryParams.toString()}`,
{ responseType: 'blob' }
)
// Get the filename from the response headers
const contentDisposition = response.headers?.['content-disposition']
let filename = `search_${assetType}_${new Date().toISOString().slice(0, 10)}.csv`
if (contentDisposition) {
const match = contentDisposition.match(/filename="?([^"]+)"?/)
if (match) filename = match[1]
}
// Create a download link
const blob = new Blob([response.data as BlobPart], { type: 'text/csv;charset=utf-8' })
const url = URL.createObjectURL(blob)
const link = document.createElement('a')
link.href = url
link.download = filename
document.body.appendChild(link)
link.click()
document.body.removeChild(link)
URL.revokeObjectURL(url)
// Open the download URL directly and let the browser's native download manager handle it
// This shows download progress and does not block the page
const downloadUrl = `/api/assets/search/export/?${queryParams.toString()}`
window.open(downloadUrl, '_blank')
}
}

View File

@@ -82,7 +82,9 @@ export interface GetScansResponse {
export interface InitiateScanRequest {
organizationId?: number // Organization ID (choose one)
targetId?: number // Target ID (choose one)
configuration: string // YAML configuration string (required)
engineIds: number[] // Scan engine ID list (required)
engineNames: string[] // Engine name list (required)
}
/**
@@ -90,7 +92,9 @@ export interface InitiateScanRequest {
*/
export interface QuickScanRequest {
targets: { name: string }[] // Target list
configuration: string // YAML configuration string (required)
engineIds: number[] // Scan engine ID list (required)
engineNames: string[] // Engine name list (required)
}
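An illustrative quick-scan payload matching the updated shape, where `configuration` would typically be produced by `mergeEngineConfigurations`; the IDs, names, and YAML content are placeholders.
const request: QuickScanRequest = {
  targets: [{ name: "example.com" }, { name: "192.168.1.0/24" }],
  configuration: "subdomain_discovery:\n  enabled: true\n\n# ---\n\nport_scan:\n  top_ports: 100",
  engineIds: [1, 2],
  engineNames: ["Recon", "Port Scan"],
}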
/**

View File

@@ -31,7 +31,9 @@ export interface ScheduledScan {
// Create scheduled scan request (organizationId and targetId are mutually exclusive)
export interface CreateScheduledScanRequest {
name: string
engineIds: number[] // Engine ID list
configuration: string // YAML configuration string (required)
engineIds: number[] // Engine ID list (required)
engineNames: string[] // Engine name list (required)
organizationId?: number // Organization scan mode
targetId?: number // Target scan mode
cronExpression: string // Cron expression, format: minute hour day month weekday
@@ -41,7 +43,9 @@ export interface CreateScheduledScanRequest {
// Update scheduled scan request (organizationId and targetId are mutually exclusive)
export interface UpdateScheduledScanRequest {
name?: string
engineIds?: number[] // Engine ID list
configuration?: string // YAML configuration string
engineIds?: number[] // Engine ID list (optional, for reference)
engineNames?: string[] // Engine name list (optional, for reference)
organizationId?: number // Organization scan mode (clears targetId when set)
targetId?: number // Target scan mode (clears organizationId when set)
cronExpression?: string