Mirror of https://github.com/yyhuni/xingrin.git, synced 2026-01-31 19:53:11 +08:00
Compare commits
31 Commits
fe1579e7fb
ef117d2245
39cea5a918
0d477ce269
1bb6e90c3d
9004c77031
71de0b4b1b
1ef1f9709e
3323bd2a4f
df602dd1ae
372bab5267
bed80e4ba7
3b014bd04c
5e60911cb3
5de7ea9dbc
971641cdeb
e5a74faf9f
e9a58e89aa
3d9d520dc7
8d814b5864
c16b7afabe
fa55167989
55a2762c71
5532f1e63a
948568e950
873b6893f1
dbb30f7c78
38eced3814
68fc7cee3b
6e23824a45
a88cceb4f4
.github/workflows/docker-build.yml (vendored, 14 lines changed)
@@ -16,7 +16,7 @@ env:
   IMAGE_PREFIX: yyhuni

 permissions:
-  contents: write # allow modifying repository contents
+  contents: write

 jobs:
   build:
@@ -27,18 +27,23 @@ jobs:
   - image: xingrin-server
     dockerfile: docker/server/Dockerfile
    context: .
+   platforms: linux/amd64,linux/arm64
  - image: xingrin-frontend
    dockerfile: docker/frontend/Dockerfile
    context: .
+   platforms: linux/amd64 # Next.js crashes under QEMU when building for ARM64
  - image: xingrin-worker
    dockerfile: docker/worker/Dockerfile
    context: .
+   platforms: linux/amd64,linux/arm64
  - image: xingrin-nginx
    dockerfile: docker/nginx/Dockerfile
    context: .
+   platforms: linux/amd64,linux/arm64
  - image: xingrin-agent
    dockerfile: docker/agent/Dockerfile
    context: .
+   platforms: linux/amd64,linux/arm64

 steps:
   - name: Checkout
@@ -48,7 +53,6 @@ jobs:
   run: |
     echo "=== Before cleanup ==="
     df -h
-    # remove large unneeded packages
     sudo rm -rf /usr/share/dotnet
     sudo rm -rf /usr/local/lib/android
     sudo rm -rf /opt/ghc
@@ -95,18 +99,20 @@ jobs:
   with:
     context: ${{ matrix.context }}
     file: ${{ matrix.dockerfile }}
-    platforms: linux/amd64,linux/arm64
+    platforms: ${{ matrix.platforms }}
     push: true
     tags: |
       ${{ env.IMAGE_PREFIX }}/${{ matrix.image }}:${{ steps.version.outputs.VERSION }}
       ${{ steps.version.outputs.IS_RELEASE == 'true' && format('{0}/{1}:latest', env.IMAGE_PREFIX, matrix.image) || '' }}
     cache-from: type=gha
     cache-to: type=gha,mode=max
+    provenance: false
+    sbom: false

 # After all images build successfully, update the VERSION file
 update-version:
   runs-on: ubuntu-latest
-  needs: build # wait for all build jobs to finish
+  needs: build
   if: startsWith(github.ref, 'refs/tags/v')
   steps:
     - name: Checkout
@@ -1,12 +0,0 @@
---
trigger: always_on
---

1. The backend web service should listen on port 8888.
3. All frontend routes must carry a trailing slash to match Django DRF's routing rules (see the sketch after this list).
4. Web endpoints can be tested with curl.
8. All frontend API calls belong in @services, and all type definitions belong in @types.
10. Implement frontend loading and similar state with React Query, which manages it automatically.
17. Keep the toasts for business operations inside hooks.
23. Avoid window.location.href for navigation unless necessary; use Next.js client-side routing instead.
24. For UI work, check MCP first for reusable, polished components.
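A minimal sketch of why rule 3 holds: DRF's DefaultRouter generates URL patterns with trailing slashes by default, so a frontend call without one misses the route. The ViewSet below is a hypothetical stand-in, not code from this repository:

```python
from rest_framework import viewsets
from rest_framework.routers import DefaultRouter

class TargetViewSet(viewsets.ViewSet):  # hypothetical minimal ViewSet
    def list(self, request):
        ...

router = DefaultRouter()  # trailing_slash=True is the default
router.register(r"targets", TargetViewSet, basename="target")

# router.urls now contains patterns like "targets/" and "targets/{pk}/",
# both ending in "/", which is why frontend routes need the trailing slash.
```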
@@ -1,85 +0,0 @@
---
trigger: manual
description: This rule must be invoked whenever a code review is performed
---

### **0. Logical correctness & bug hunting** *(highest priority; must be traced by hand)*

**Goal**: without relying on tests, actively find logic errors where the code runs but produces wrong results.

1. **Trace the key paths by hand**:
   - Pick 2-3 representative inputs (including boundaries) and **step through the code mentally or on paper**.
   - Does the output match expectations? Is every variable change correct at each step?
2. **Common logic bugs** (see the sketch after this section):
   - **off-by-one**: loops, array indexing, pagination
   - **conditional mistakes**: `and`/`or` precedence, misused short-circuit evaluation
   - **state confusion**: uninitialized variables, accidental overwrites
   - **algorithmic drift**: sorting, searching, midpoint handling in binary search
   - **floating-point precision**: is `==` misused to compare floats?
3. **Control-flow review**:
   - Are all `if/else` branches covered? Any unreachable code?
   - Does `switch`/`match` have a `default`? Any missing cases?
   - What do exception paths return? Is `finally` cleanup missing?
4. **Business-logic consistency**:
   - Does the code follow the **business rules**? (e.g. "order total = item price × quantity + shipping - discount")
   - Are implicit constraints missed? (e.g. "users may only review completed orders")
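Two of item 2's pitfalls, off-by-one and float equality, in a minimal self-contained sketch:

```python
import math

# Off-by-one: range(1, n) stops at n - 1 and silently drops the last page.
def page_numbers(n: int) -> list[int]:
    return list(range(1, n + 1))  # correct: includes page n

# Float precision: 0.1 + 0.2 != 0.3 under IEEE 754; compare with a tolerance.
assert 0.1 + 0.2 != 0.3              # the naive == comparison fails
assert math.isclose(0.1 + 0.2, 0.3)  # the tolerant comparison passes
```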
### **I. Functionality & correctness** *(blocking issues must be fixed)*

1. **Requirements coverage**: is the requirement covered 100%? Any missing or extraneous features?
2. **Boundary conditions**:
   - inputs: `null`, empty, extreme values, malformed formats
   - collections: empty, single element, very large (e.g. 10⁶)
   - loops: termination conditions, off-by-one
3. **Error handling**:
   - Are exceptions caught comprehensively? Do failure paths degrade gracefully?
   - Are error messages clear, without leaking stack traces?
4. **Concurrency safety**:
   - Race or deadlock risks? Are shared resources synchronized?
   - Is `volatile`/`synchronized`/`Lock`/`atomic` used where needed?
5. **Unit tests**:
   - Coverage ≥ 80%? Positive, boundary, and exception cases included?
   - Are tests independent, with no external dependencies?

### **II. Code quality & readability**

1. **Naming**: self-explanatory? Follows conventions?
2. **Function design**:
   - **Single responsibility**? Parameters ≤ 4? Suggested length < 50 lines (adjust per language)
   - Can parts be extracted into utility functions?
3. **Structure & complexity**:
   - No duplicated code? Cyclomatic complexity < 10?
   - Nesting ≤ 3 levels? Use guard clauses to return early
4. **Comments**: explain **why**, not **what**; complex logic must be commented
5. **Consistent style**: auto-format with `Prettier`/`ESLint`/`Spotless`

### **III. Architecture & design**

1. **SOLID**: single responsibility, open/closed, dependency inversion?
2. **Dependencies**: depend on interfaces rather than implementations? No circular dependencies?
3. **Testability**: does the code support dependency injection? Avoid hard-coded `new`
4. **Extensibility**: does adding a feature require changing only one place?

### **IV. Performance**

- **N+1 queries**? IO, logging, or allocation inside loops?
- Reasonable algorithmic complexity? (e.g. can O(n²) be improved?)
- Memory: no leaks? Large objects released promptly? Do caches expire?

### **V. Other**

1. **Maintainability**: do logs carry context? Is the code cleaner after the change?
2. **Compatibility**: are API and database changes backward compatible?
3. **Dependency management**: is a new library necessary? License compliant?

---

### **Review best practices**

- **Review in small batches**: ≤ 200 lines at a time
- **Tone**: `"Consider splitting this function to improve readability"` rather than `"This function is too long"`
- **Automate first**: style / null-pointer / security scanning → CI tools
- **Triage findings**:
  - 🛑 **Blocking**: functional bugs, security vulnerabilities
  - ⚠️ **Must fix**: design flaws, performance bottlenecks
  - 💡 **Suggestion**: style, naming, readability
@@ -1,195 +0,0 @@
---
trigger: always_on
---

## Standard layered-architecture call order

Following **DDD (domain-driven design) and clean architecture** principles, the call order is:

```
HTTP request → Views → Tasks → Services → Repositories → Models
```

---

### 📊 Full call-chain diagram

```
┌─────────────────────────────────────────────────────────────┐
│                   HTTP Request (frontend)                   │
└────────────────────────┬────────────────────────────────────┘
                         ↓
┌─────────────────────────────────────────────────────────────┐
│ Views (HTTP layer)                                          │
│ - validate parameters                                       │
│ - check permissions                                         │
│ - call Tasks / Services                                     │
│ - return the HTTP response                                  │
└────────────────────────┬────────────────────────────────────┘
                         ↓
         ┌───────────┴────────────┐
         ↓ (async)                ↓ (sync)
┌──────────────────┐    ┌──────────────────┐
│ Tasks (tasks)    │    │ Services (logic) │
│ - async execution│    │ - business rules │
│ - background jobs│───>│ - transactions   │
│ - notifications  │    │ - data validation│
└──────────────────┘    └────────┬─────────┘
                                 ↓
                   ┌───────────────────────┐
                   │ Repositories (storage)│
                   │ - data access         │
                   │ - query encapsulation │
                   │ - bulk operations     │
                   └───────────┬───────────┘
                               ↓
                   ┌───────────────────────┐
                   │ Models (model layer)  │
                   │ - ORM definitions     │
                   │ - data structures     │
                   │ - relation mappings   │
                   └───────────────────────┘
```

---

### 🔄 Concrete call examples

### **Scenario 1: synchronous delete (Views → Services → Repositories → Models)**

```python
# 1. Views layer (views.py)
def some_sync_delete(self, request):
    # Parameter validation
    target_ids = request.data.get('ids')

    # Call the Service layer
    service = TargetService()
    result = service.bulk_delete_targets(target_ids)

    # Return the response
    return Response({'message': 'deleted'})

# 2. Services layer (services/target_service.py)
class TargetService:
    def bulk_delete_targets(self, target_ids):
        # Business-logic validation
        logger.info("About to delete...")

        # Call the Repository layer
        deleted_count = self.repo.bulk_delete_by_ids(target_ids)

        # Return the result
        return deleted_count

# 3. Repositories layer (repositories/django_target_repository.py)
class DjangoTargetRepository:
    def bulk_delete_by_ids(self, target_ids):
        # Data-access operation
        return Target.objects.filter(id__in=target_ids).delete()

# 4. Models layer (models.py)
class Target(models.Model):
    # ORM definition
    name = models.CharField(...)
```

---

### **Scenario 2: asynchronous delete (Views → Tasks → Services → Repositories → Models)**

```python
# 1. Views layer (views.py)
def destroy(self, request, *args, **kwargs):
    target = self.get_object()

    # Call the Tasks layer (async)
    async_bulk_delete_targets([target.id], [target.name])

    # Return 202 immediately
    return Response(status=202)

# 2. Tasks layer (tasks/target_tasks.py)
def async_bulk_delete_targets(target_ids, target_names):
    def _delete():
        # Send a notification
        create_notification("Deleting...")

        # Call the Service layer
        service = TargetService()
        result = service.bulk_delete_targets(target_ids)

        # Send a completion notification
        create_notification("Deleted successfully")

    # Run on a background thread
    threading.Thread(target=_delete).start()

# 3. Services layer (services/target_service.py)
class TargetService:
    def bulk_delete_targets(self, target_ids):
        # Business logic
        return self.repo.bulk_delete_by_ids(target_ids)

# 4. Repositories layer (repositories/django_target_repository.py)
class DjangoTargetRepository:
    def bulk_delete_by_ids(self, target_ids):
        # Data access
        return Target.objects.filter(id__in=target_ids).delete()

# 5. Models layer (models.py)
class Target(models.Model):
    # ORM definition
    ...
```

---

### 📋 Per-layer responsibilities

| Layer | Responsibilities | Must not do |
| --- | --- | --- |
| **Views** | HTTP request handling, parameter validation, permission checks | ❌ access Models directly<br>❌ business logic |
| **Tasks** | async execution, background jobs, notifications | ❌ access Models directly<br>❌ HTTP responses |
| **Services** | business logic, transaction management, data validation | ❌ raw SQL<br>❌ anything HTTP |
| **Repositories** | data access, query encapsulation, bulk operations | ❌ business logic<br>❌ notifications |
| **Models** | ORM definitions, data structures, relation mappings | ❌ business logic<br>❌ complex queries |

---

### ✅ Best-practice principles

1. **One-way dependencies**: only call downward, never upward.

```
Views → Tasks → Services → Repositories → Models
(upper)                               (lower)
```

2. **Layer isolation**: adjacent layers interact; no skipping layers.
   - ✅ Views → Services
   - ✅ Tasks → Services
   - ✅ Services → Repositories
   - ❌ Views → Repositories (skips a layer)
   - ❌ Tasks → Models (skips layers)
3. **Dependency injection**: inject dependencies through the constructor.

```python
class TargetService:
    def __init__(self, repo=None):
        # Accept the dependency as a parameter; fall back to the default implementation
        self.repo = repo or DjangoTargetRepository()
```

4. **Interface abstraction**: define interfaces with Protocol (a combined sketch of items 3 and 4 follows this section).

```python
from typing import Protocol

class TargetRepository(Protocol):
    def bulk_delete_by_ids(self, ids): ...
```
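A sketch of how items 3 and 4 combine to make the service testable without a database; FakeTargetRepository is a hypothetical test double, not a class from this codebase:

```python
from typing import Protocol

class TargetRepository(Protocol):
    def bulk_delete_by_ids(self, ids: list[int]) -> int: ...

class FakeTargetRepository:  # hypothetical in-memory test double
    def __init__(self) -> None:
        self.deleted: list[int] = []

    def bulk_delete_by_ids(self, ids: list[int]) -> int:
        self.deleted.extend(ids)
        return len(ids)

class TargetService:
    def __init__(self, repo: TargetRepository) -> None:
        self.repo = repo  # injected through the constructor

    def bulk_delete_targets(self, target_ids: list[int]) -> int:
        return self.repo.bulk_delete_by_ids(target_ids)

# In a test, no database is needed:
fake = FakeTargetRepository()
assert TargetService(fake).bulk_delete_targets([1, 2, 3]) == 3
```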
@@ -30,17 +30,26 @@ def fetch_config_and_setup_django():
        print("[ERROR] SERVER_URL environment variable missing", file=sys.stderr)
        sys.exit(1)

-   config_url = f"{server_url}/api/workers/config/"
+   # Declare the worker's identity (local/remote) via an environment variable
+   is_local = os.environ.get("IS_LOCAL", "false").lower() == "true"
+   config_url = f"{server_url}/api/workers/config/?is_local={str(is_local).lower()}"
    print(f"[CONFIG] Fetching configuration from the config center: {config_url}")
+   print(f"[CONFIG] IS_LOCAL={is_local}")
    try:
        resp = requests.get(config_url, timeout=10)
        resp.raise_for_status()
        config = resp.json()

        # Database config (required)
-       os.environ.setdefault("DB_HOST", config['db']['host'])
-       os.environ.setdefault("DB_PORT", config['db']['port'])
-       os.environ.setdefault("DB_NAME", config['db']['name'])
-       os.environ.setdefault("DB_USER", config['db']['user'])
+       db_host = config['db']['host']
+       db_port = config['db']['port']
+       db_name = config['db']['name']
+       db_user = config['db']['user']
+
+       os.environ.setdefault("DB_HOST", db_host)
+       os.environ.setdefault("DB_PORT", db_port)
+       os.environ.setdefault("DB_NAME", db_name)
+       os.environ.setdefault("DB_USER", db_user)
        os.environ.setdefault("DB_PASSWORD", config['db']['password'])

        # Redis config
@@ -52,7 +61,12 @@ def fetch_config_and_setup_django():
        os.environ.setdefault("ENABLE_COMMAND_LOGGING", str(config['logging']['enableCommandLogging']).lower())
        os.environ.setdefault("DEBUG", str(config['debug']))

-       print(f"[CONFIG] Fetched configuration from the config center: {config_url}")
+       print(f"[CONFIG] ✓ Configuration fetched successfully")
+       print(f"[CONFIG] DB_HOST: {db_host}")
+       print(f"[CONFIG] DB_PORT: {db_port}")
+       print(f"[CONFIG] DB_NAME: {db_name}")
+       print(f"[CONFIG] DB_USER: {db_user}")
+       print(f"[CONFIG] REDIS_URL: {config['redisUrl']}")

    except Exception as e:
        print(f"[ERROR] Failed to fetch configuration: {config_url} - {e}", file=sys.stderr)
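For reference, os.environ.setdefault only writes a key that is not already set, so values injected with docker run -e always take precedence over config-center values; a minimal illustration:

```python
import os

os.environ["DB_HOST"] = "10.0.0.5"            # injected via docker run -e
os.environ.setdefault("DB_HOST", "postgres")  # config-center value is ignored
os.environ.setdefault("DB_PORT", "5432")      # unset key: the default applies
assert os.environ["DB_HOST"] == "10.0.0.5" and os.environ["DB_PORT"] == "5432"
```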
@@ -27,3 +27,10 @@ vulnerabilities_saved = Signal()
 # - worker_name: str  worker name
 # - message: str      failure reason
 worker_delete_failed = Signal()
+
+# Signal: all workers under high load
+# Args:
+# - worker_name: str  name of the selected worker
+# - cpu: float        CPU usage
+# - mem: float        memory usage
+all_workers_high_load = Signal()
@@ -198,9 +198,27 @@ class NucleiTemplateRepoService:

    # Decide between clone and pull
    if git_dir.is_dir():
-       # Existing repo: pull
-       cmd = ["git", "-C", str(local_path), "pull", "--ff-only"]
-       action = "pull"
+       # Check whether the remote URL has changed
+       current_remote = subprocess.run(
+           ["git", "-C", str(local_path), "remote", "get-url", "origin"],
+           check=False,
+           stdout=subprocess.PIPE,
+           stderr=subprocess.PIPE,
+           text=True,
+       )
+       current_url = current_remote.stdout.strip() if current_remote.returncode == 0 else ""
+
+       if current_url != obj.repo_url:
+           # Remote URL changed: delete the old directory and re-clone
+           logger.info("nuclei template repo %s remote URL changed, re-cloning: %s -> %s", obj.id, current_url, obj.repo_url)
+           shutil.rmtree(local_path)
+           local_path.mkdir(parents=True, exist_ok=True)
+           cmd = ["git", "clone", "--depth", "1", obj.repo_url, str(local_path)]
+           action = "clone"
+       else:
+           # Existing repo with an unchanged URL: pull
+           cmd = ["git", "-C", str(local_path), "pull", "--ff-only"]
+           action = "pull"
    else:
        # New repo: clone
        if local_path.exists() and not local_path.is_dir():
@@ -8,13 +8,32 @@
 2. Pick the worker with the lowest load (may be local or remote)
 3. Local worker: run docker run directly
 4. Remote worker: run docker run over SSH
-5. The container is destroyed automatically when the task finishes
+5. The container is destroyed automatically when the task finishes (--rm)
+
+Image version management:
+- Version locking: settings.IMAGE_TAG keeps the server and worker versions in sync
+- Pre-pull strategy: images are pre-pulled at install time; execution uses --pull=missing
+- Local development: TASK_EXECUTOR_IMAGE can point at a locally built image
+
+Environment-variable injection:
+- Worker containers do not use env_file; variables are injected dynamically with docker run -e
+- Only SERVER_URL is injected; the container fetches its full config from the config center on startup
+- Local worker:  SERVER_URL = http://server:{port} (internal Docker network)
+- Remote worker: SERVER_URL = http://{public_host}:{port} (public address)
+
+Task launch flow (see the sketch after this section):
+1. The server submits a task via execute_scan_flow() and similar methods
+2. select_best_worker() reads heartbeat data from Redis and picks the least-loaded node
+3. _build_docker_command() assembles the full docker run command:
+   - set the network (a local worker joins the Docker network; remote leaves it unset)
+   - inject environment variables (-e SERVER_URL=...)
+   - mount the results and log directories (-v)
+   - specify the entry script (python -m apps.scan.scripts.xxx)
+4. _execute_docker_command() runs the command:
+   - local: subprocess.run() directly
+   - remote: over paramiko SSH
+5. docker run -d returns the container ID immediately; the task runs in the background
+
+Traits:
+- Load-aware: tasks go to the least busy machine first
+- Unified scheduling: local and remote workers share the same selection logic
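A minimal sketch of the command assembly described above; the real _build_docker_command in TaskDistributor handles more options, and the network name and mount paths below are taken from elsewhere in this changeset:

```python
import shlex

def build_docker_command(image: str, server_url: str, is_local: bool,
                         results_dir: str, logs_dir: str, script: str) -> str:
    """Simplified sketch of the docker run assembly, not the actual implementation."""
    parts = ["docker", "run", "-d", "--rm", "--pull=missing"]
    if is_local:
        parts += ["--network", "xingrin_network"]        # join the internal network
    parts += ["-e", f"SERVER_URL={server_url}",          # config-center address
              "-e", f"IS_LOCAL={'true' if is_local else 'false'}",
              "-v", f"{results_dir}:/app/backend/results",
              "-v", f"{logs_dir}:/app/backend/logs",
              image, "python", "-m", script]
    return " ".join(shlex.quote(p) for p in parts)
```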
@@ -134,11 +153,30 @@ class TaskDistributor:
        else:
            scored_workers.append((worker, score, cpu, mem))

-   # Fallback: if no worker has normal load, use the least-loaded high-load one
+   # Fallback: if no worker has normal load, wait and then reselect
    if not scored_workers:
        if high_load_workers:
-           logger.warning("All workers under high load; falling back to the least loaded one")
-           scored_workers = high_load_workers
+           # Under high load, wait first to give the system breathing room (default 60 seconds)
+           high_load_wait = getattr(settings, 'HIGH_LOAD_WAIT_SECONDS', 60)
+           logger.warning("All workers under high load; waiting %d seconds before retrying...", high_load_wait)
+           time.sleep(high_load_wait)
+
+           # Reselect (a recursive call might find that the load has dropped);
+           # to avoid unbounded recursion, just take the least-loaded high-load worker
+           high_load_workers.sort(key=lambda x: x[1])
+           best_worker, _, cpu, mem = high_load_workers[0]
+
+           # Send the high-load notification
+           from apps.common.signals import all_workers_high_load
+           all_workers_high_load.send(
+               sender=self.__class__,
+               worker_name=best_worker.name,
+               cpu=cpu,
+               mem=mem
+           )
+
+           logger.info("Selected worker: %s (CPU: %.1f%%, MEM: %.1f%%)", best_worker.name, cpu, mem)
+           return best_worker
        else:
            logger.warning("No workers available")
            return None
@@ -202,8 +240,16 @@ class TaskDistributor:
    host_results_dir = settings.HOST_RESULTS_DIR  # /opt/xingrin/results
    host_logs_dir = settings.HOST_LOGS_DIR        # /opt/xingrin/logs

-   # Environment variables: only SERVER_URL; the container fetches the rest from the config center at startup
-   env_vars = [f"-e SERVER_URL={shlex.quote(server_url)}"]
+   # Environment variables: SERVER_URL + IS_LOCAL; the container fetches the rest from the config center at startup
+   # IS_LOCAL declares the worker's identity to the config center and decides which database address is returned
+   # Prefect local-mode settings: disable the API server and the event system
+   is_local_str = "true" if worker.is_local else "false"
+   env_vars = [
+       f"-e SERVER_URL={shlex.quote(server_url)}",
+       f"-e IS_LOCAL={is_local_str}",
+       "-e PREFECT_API_URL=",                # disable the API server
+       "-e PREFECT_LOGGING_EXTRA_LOGGERS=",  # disable Prefect's extra internal loggers
+   ]

    # Mounted volumes
    volumes = [
@@ -383,8 +429,20 @@ class TaskDistributor:
    Note:
        engine_config is looked up from the database inside the Flow via scan_id
    """
+   logger.info("="*60)
+   logger.info("execute_scan_flow starting")
+   logger.info("  scan_id: %s", scan_id)
+   logger.info("  target_name: %s", target_name)
+   logger.info("  target_id: %s", target_id)
+   logger.info("  scan_workspace_dir: %s", scan_workspace_dir)
+   logger.info("  engine_name: %s", engine_name)
+   logger.info("  docker_image: %s", self.docker_image)
+   logger.info("="*60)

    # 1. Wait for the submit interval (runs on a background thread; does not block the API)
+   logger.info("Waiting for the submit interval...")
    self._wait_for_submit_interval()
+   logger.info("Submit-interval wait finished")

    # 2. Pick the best worker
    worker = self.select_best_worker()
@@ -116,7 +116,7 @@ class NucleiTemplateRepoViewSet(viewsets.ModelViewSet):
        return Response({"message": str(exc)}, status=status.HTTP_400_BAD_REQUEST)
    except Exception as exc:  # noqa: BLE001
        logger.error("Failed to refresh the nuclei template repo: %s", exc, exc_info=True)
-       return Response({"message": "Failed to refresh the repository"}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)
+       return Response({"message": f"Failed to refresh the repository: {exc}"}, status=status.HTTP_500_INTERNAL_SERVER_ERROR)

    return Response({"message": "Refresh succeeded", "result": result}, status=status.HTTP_200_OK)
@@ -177,75 +177,16 @@ class WorkerNodeViewSet(viewsets.ModelViewSet):
            'created': created
        })

-   def _get_client_ip(self, request) -> str:
-       """Get the client's real IP"""
-       x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
-       if x_forwarded_for:
-           return x_forwarded_for.split(',')[0].strip()
-       return request.META.get('REMOTE_ADDR', '')
-
-   def _is_local_request(self, client_ip: str) -> bool:
-       """
-       Decide whether the request is local (inside the Docker network)
-
-       Local requests are characterized by:
-       - originating inside the Docker network (172.x.x.x)
-       - originating from localhost (127.0.0.1)
-       """
-       if not client_ip:
-           return True  # default to local when the IP cannot be determined
-
-       # Default Docker network ranges
-       if client_ip.startswith('172.') or client_ip.startswith('10.'):
-           return True
-
-       # localhost
-       if client_ip in ('127.0.0.1', '::1', 'localhost'):
-           return True
-
-       return False
-
    @action(detail=False, methods=['get'])
    def config(self, request):
        """
        Get the task-container configuration (config-center API)

        Workers call this endpoint at startup to fetch their full configuration,
        centralizing config management. A worker only needs SERVER_URL; everything
        else is returned dynamically by this API.
+       The worker declares its identity via the IS_LOCAL environment variable and
+       passes ?is_local=true/false on the request.

-       Config distribution flow (old, IP-based):
-         Worker starts
-           -> GET /api/workers/config/
-           -> _get_client_ip()      (request source IP: X-Forwarded-For or REMOTE_ADDR; supports Nginx proxying)
-           -> _is_local_request()   (inside the Docker network? 172.x.x.x / 10.x.x.x default ranges, 127.0.0.1 / ::1 localhost)
-           -> local worker (inside Docker):   db: postgres,    redis: redis
-           -> remote worker (public access):  db: PUBLIC_HOST, redis: PUBLIC_HOST:6379
+       Request parameters:
+           is_local: true/false - whether the worker is a local node (inside the Docker network)

        Returns:
            {
@@ -253,19 +194,29 @@ class WorkerNodeViewSet(viewsets.ModelViewSet):
            "redisUrl": "...",
            "paths": {"results": "...", "logs": "..."}
            }

        Config logic:
        - local worker  (is_local=true):  db_host=postgres,    redis=redis:6379
        - remote worker (is_local=false): db_host=PUBLIC_HOST, redis=PUBLIC_HOST:6379
        """
        from django.conf import settings
        import logging
        logger = logging.getLogger(__name__)

-       # Decide the request origin: local worker or remote worker
-       # A local worker sits inside the Docker network and can reach the postgres service directly
-       # A remote worker must go through the public IP
-       client_ip = self._get_client_ip(request)
-       is_local_worker = self._is_local_request(client_ip)
+       # Read the worker identity from the request parameter (declared by the worker itself)
+       # instead of inferring it from the IP, avoiding compatibility issues across network setups
+       is_local_param = request.query_params.get('is_local', '').lower()
+       is_local_worker = is_local_param == 'true'

        # Return a different database address depending on the request origin
        db_host = settings.DATABASES['default']['HOST']
        _is_internal_db = db_host in ('postgres', 'localhost', '127.0.0.1')

+       logger.info(
+           "Worker config request - is_local_param: %s, is_local_worker: %s, db_host: %s, is_internal_db: %s",
+           is_local_param, is_local_worker, db_host, _is_internal_db
+       )

        if _is_internal_db:
            # Local-database scenario
            if is_local_worker:
@@ -274,13 +225,18 @@ class WorkerNodeViewSet(viewsets.ModelViewSet):
                worker_redis_url = 'redis://redis:6379/0'
            else:
                # Remote worker: reached through the public IP
-               worker_db_host = settings.PUBLIC_HOST
-               worker_redis_url = f'redis://{settings.PUBLIC_HOST}:6379/0'
+               public_host = settings.PUBLIC_HOST
+               if public_host in ('server', 'localhost', '127.0.0.1'):
+                   logger.warning("A remote worker requested config, but PUBLIC_HOST=%s is not a valid public address", public_host)
+               worker_db_host = public_host
+               worker_redis_url = f'redis://{public_host}:6379/0'
        else:
            # Remote-database scenario: every worker uses DB_HOST
            worker_db_host = db_host
            worker_redis_url = getattr(settings, 'WORKER_REDIS_URL', 'redis://redis:6379/0')

+       logger.info("Returning worker config - db_host: %s, redis_url: %s", worker_db_host, worker_redis_url)

        return Response({
            'db': {
                'host': worker_db_host,
@@ -6,7 +6,7 @@
 import logging
 from django.dispatch import receiver

-from apps.common.signals import vulnerabilities_saved, worker_delete_failed
+from apps.common.signals import vulnerabilities_saved, worker_delete_failed, all_workers_high_load
 from apps.scan.notifications import create_notification, NotificationLevel, NotificationCategory

 logger = logging.getLogger(__name__)
@@ -80,3 +80,15 @@ def on_worker_delete_failed(sender, worker_name, message, **kwargs):
        category=NotificationCategory.SYSTEM
    )
    logger.warning("Worker delete-failure notification sent - worker=%s, message=%s", worker_name, message)
+
+
+@receiver(all_workers_high_load)
+def on_all_workers_high_load(sender, worker_name, cpu, mem, **kwargs):
+    """Notification handler for when every worker is under high load"""
+    create_notification(
+        title="System load is high",
+        message=f"All nodes are under high load; the least-loaded node {worker_name} (CPU: {cpu:.1f}%, MEM: {mem:.1f}%) was selected to run the task. Scan speed may be affected",
+        level=NotificationLevel.MEDIUM,
+        category=NotificationCategory.SYSTEM
+    )
+    logger.warning("High-load notification sent - worker=%s, cpu=%.1f%%, mem=%.1f%%", worker_name, cpu, mem)
@@ -6,14 +6,32 @@
 Configuration must be fetched and environment variables set before Django is imported.
 """
 import argparse
-from apps.common.container_bootstrap import fetch_config_and_setup_django
 import sys
 import os
+import traceback


 def main():
+    print("="*60)
+    print("run_initiate_scan.py starting")
+    print(f"  Python: {sys.version}")
+    print(f"  CWD: {os.getcwd()}")
+    print(f"  SERVER_URL: {os.environ.get('SERVER_URL', 'NOT SET')}")
+    print("="*60)

     # 1. Fetch config from the config center and initialize Django (must happen before Django imports)
-    fetch_config_and_setup_django()
+    print("[1/4] Fetching configuration from the config center...")
+    try:
+        from apps.common.container_bootstrap import fetch_config_and_setup_django
+        fetch_config_and_setup_django()
+        print("[1/4] ✓ Configuration fetched")
+    except Exception as e:
+        print(f"[1/4] ✗ Failed to fetch configuration: {e}")
+        traceback.print_exc()
+        sys.exit(1)

     # 2. Parse command-line arguments
+    print("[2/4] Parsing command-line arguments...")
     parser = argparse.ArgumentParser(description="Run the scan-initialization Flow")
     parser.add_argument("--scan_id", type=int, required=True, help="scan task ID")
     parser.add_argument("--target_name", type=str, required=True, help="target name")
@@ -23,21 +41,41 @@ def main():
     parser.add_argument("--scheduled_scan_name", type=str, default=None, help="scheduled-scan name (optional)")

     args = parser.parse_args()
+    print(f"[2/4] ✓ Arguments parsed:")
+    print(f"  scan_id: {args.scan_id}")
+    print(f"  target_name: {args.target_name}")
+    print(f"  target_id: {args.target_id}")
+    print(f"  scan_workspace_dir: {args.scan_workspace_dir}")
+    print(f"  engine_name: {args.engine_name}")
+    print(f"  scheduled_scan_name: {args.scheduled_scan_name}")

     # 3. It is now safe to import Django-dependent modules
-    from apps.scan.flows.initiate_scan_flow import initiate_scan_flow
+    print("[3/4] Importing initiate_scan_flow...")
+    try:
+        from apps.scan.flows.initiate_scan_flow import initiate_scan_flow
+        print("[3/4] ✓ Import succeeded")
+    except Exception as e:
+        print(f"[3/4] ✗ Import failed: {e}")
+        traceback.print_exc()
+        sys.exit(1)

     # 4. Run the Flow
-    result = initiate_scan_flow(
-        scan_id=args.scan_id,
-        target_name=args.target_name,
-        target_id=args.target_id,
-        scan_workspace_dir=args.scan_workspace_dir,
-        engine_name=args.engine_name,
-        scheduled_scan_name=args.scheduled_scan_name,
-    )
-
-    print(f"Flow finished: {result}")
+    print("[4/4] Running initiate_scan_flow...")
+    try:
+        result = initiate_scan_flow(
+            scan_id=args.scan_id,
+            target_name=args.target_name,
+            target_id=args.target_id,
+            scan_workspace_dir=args.scan_workspace_dir,
+            engine_name=args.engine_name,
+            scheduled_scan_name=args.scheduled_scan_name,
+        )
+        print("[4/4] ✓ Flow finished")
+        print(f"Result: {result}")
+    except Exception as e:
+        print(f"[4/4] ✗ Flow failed: {e}")
+        traceback.print_exc()
+        sys.exit(1)


 if __name__ == "__main__":
@@ -266,15 +266,26 @@ class ScanCreationService:
    Args:
        scan_data: list of scan-task payloads
    """
+   logger.info("="*60)
+   logger.info("Dispatching scan tasks to workers - count: %d", len(scan_data))
+   logger.info("="*60)

    # A background thread needs a fresh database connection
    connection.close()
+   logger.info("Old database connection closed; a new one will be acquired")

    distributor = get_task_distributor()
+   logger.info("TaskDistributor initialized")

    scan_repo = DjangoScanRepository()
+   logger.info("ScanRepository initialized")

    for data in scan_data:
        scan_id = data['scan_id']
+       logger.info("-"*40)
+       logger.info("Dispatching scan task - Scan ID: %s, Target: %s", scan_id, data['target_name'])
        try:
+           logger.info("Calling distributor.execute_scan_flow...")
            success, message, container_id, worker_id = distributor.execute_scan_flow(
                scan_id=scan_id,
                target_name=data['target_name'],
@@ -284,20 +295,29 @@ class ScanCreationService:
                scheduled_scan_name=data.get('scheduled_scan_name'),
            )

+           logger.info(
+               "execute_scan_flow returned - success: %s, message: %s, container_id: %s, worker_id: %s",
+               success, message, container_id, worker_id
+           )

            if success:
                if container_id:
                    scan_repo.append_container_id(scan_id, container_id)
+                   logger.info("Recorded container_id: %s", container_id)
                if worker_id:
                    scan_repo.update_worker(scan_id, worker_id)
+                   logger.info("Recorded worker_id: %s", worker_id)
                logger.info(
                    "✓ Scan task submitted - Scan ID: %s, Worker: %s",
                    scan_id, worker_id
                )
            else:
+               logger.error("execute_scan_flow reported failure - message: %s", message)
                raise Exception(message)

        except Exception as e:
            logger.error("Failed to submit scan task - Scan ID: %s, error: %s", scan_id, e)
+           logger.exception("Full traceback:")
            try:
                scan_repo.update_status(
                    scan_id,
@@ -157,6 +157,51 @@ class ScanService:
        """Cancel all running stages (delegates to ScanStateService)"""
        return self.state_service.cancel_running_stages(scan_id, final_status)

+   # TODO: not wired up yet
+   def add_command_to_scan(self, scan_id: int, stage_name: str, tool_name: str, command: str) -> bool:
+       """
+       Incrementally append a command to a given scan stage
+
+       Args:
+           scan_id: scan task ID
+           stage_name: stage name (e.g. 'subdomain_discovery', 'port_scan')
+           tool_name: tool name
+           command: the executed command
+
+       Returns:
+           bool: whether the command was appended successfully
+       """
+       try:
+           scan = self.get_scan(scan_id, prefetch_relations=False)
+           if not scan:
+               logger.error(f"Scan task does not exist: {scan_id}")
+               return False
+
+           stage_progress = scan.stage_progress or {}
+
+           # Make sure the stage exists
+           if stage_name not in stage_progress:
+               stage_progress[stage_name] = {'status': 'running', 'commands': []}
+
+           # Make sure the commands list exists
+           if 'commands' not in stage_progress[stage_name]:
+               stage_progress[stage_name]['commands'] = []
+
+           # Append the command
+           command_entry = f"{tool_name}: {command}"
+           stage_progress[stage_name]['commands'].append(command_entry)
+
+           scan.stage_progress = stage_progress
+           scan.save(update_fields=['stage_progress'])
+
+           command_count = len(stage_progress[stage_name]['commands'])
+           logger.info(f"✓ Command recorded: {stage_name}.{tool_name} (total: {command_count})")
+           return True
+
+       except Exception as e:
+           logger.error(f"Failed to record command: {e}")
+           return False

    # ==================== Deletion and control methods (delegated to ScanControlService) ====================

    def delete_scans_two_phase(self, scan_ids: List[int]) -> dict:
@@ -225,6 +225,13 @@ def _parse_and_validate_line(line: str) -> Optional[PortScanRecord]:
    ip = line_data.get('ip', '').strip()
    port = line_data.get('port')

+   logger.debug("Parsed host: %s, IP: %s, port: %s", host, ip, port)
+
+   if not host and ip:
+       host = ip
+       logger.debug("Host is empty; using the IP as the host")

    # Step 4: verify that the fields are non-empty
    if not host or not ip or port is None:
        logger.warning(
@@ -1,4 +1,8 @@
 #!/bin/bash

+# Versioned builds now happen automatically via GitHub Actions; to cut a release:
+#   git tag v1.0.9
+#   git push origin v1.0.9
 # ============================================
 # Docker Hub image push script
 # Purpose: build and push all service images to Docker Hub
@@ -101,6 +101,18 @@ services:
      # SSL certificate mount (easy to update)
      - ./nginx/ssl:/etc/nginx/ssl:ro

+  # Worker: scan-task execution container (built in dev mode)
+  worker:
+    build:
+      context: ..
+      dockerfile: docker/worker/Dockerfile
+    image: docker-worker:${IMAGE_TAG:-latest}-dev
+    restart: "no"
+    volumes:
+      - /opt/xingrin/results:/app/backend/results
+      - /opt/xingrin/logs:/app/backend/logs
+    command: echo "Worker image built for development"

 volumes:
   postgres_data:
@@ -27,10 +27,10 @@ check_docker() {

 # ==================== Docker Compose command detection ====================
 detect_compose_cmd() {
-    if command -v docker-compose >/dev/null 2>&1; then
-        COMPOSE_CMD="docker-compose"
-    elif docker compose version >/dev/null 2>&1; then
+    if docker compose version >/dev/null 2>&1; then
         COMPOSE_CMD="docker compose"
+    elif command -v docker-compose >/dev/null 2>&1; then
+        COMPOSE_CMD="docker-compose"
     else
         log_error "Neither docker-compose nor docker compose was detected."
         exit 1
docker/scripts/setup-swap.sh (new executable file, 97 lines)
@@ -0,0 +1,97 @@
#!/bin/bash
#
# One-shot swap setup script for Ubuntu/Debian
# Usage:   sudo ./setup-swap.sh [size in GB]
# Example: sudo ./setup-swap.sh 4   # create a 4GB swap file
#          sudo ./setup-swap.sh     # default: swap equal to the RAM size
#

set -e

# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

# Require root
if [ "$EUID" -ne 0 ]; then
    log_error "Please run this script with sudo"
    exit 1
fi

# Check for existing swap
CURRENT_SWAP_KB=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
CURRENT_SWAP_GB=$((CURRENT_SWAP_KB / 1024 / 1024))
if [ "$CURRENT_SWAP_GB" -gt 0 ]; then
    log_warn "The system already has ${CURRENT_SWAP_GB}GB of swap"
    swapon --show
    read -p "Continue and add another swap file? (y/N) " -r
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        log_info "Cancelled"
        exit 0
    fi
fi

# Get the system memory size (GB)
TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))

# Decide the swap size
if [ -n "$1" ]; then
    SWAP_SIZE_GB=$1
else
    # Default: same as RAM, minimum 1GB, maximum 8GB
    SWAP_SIZE_GB=$TOTAL_MEM_GB
    [ "$SWAP_SIZE_GB" -lt 1 ] && SWAP_SIZE_GB=1
    [ "$SWAP_SIZE_GB" -gt 8 ] && SWAP_SIZE_GB=8
fi

SWAP_FILE="/swapfile_xingrin"

log_info "System memory: ${TOTAL_MEM_GB}GB"
log_info "Creating a ${SWAP_SIZE_GB}GB swap file: $SWAP_FILE"

# Check disk space
AVAILABLE_GB=$(df / | tail -1 | awk '{print int($4/1024/1024)}')
if [ "$AVAILABLE_GB" -lt "$SWAP_SIZE_GB" ]; then
    log_error "Not enough disk space! Available: ${AVAILABLE_GB}GB, required: ${SWAP_SIZE_GB}GB"
    exit 1
fi

# Create the swap file
log_info "Creating the swap file (this may take a few minutes)..."
dd if=/dev/zero of=$SWAP_FILE bs=1G count=$SWAP_SIZE_GB status=progress

# Set permissions
chmod 600 $SWAP_FILE

# Format as swap
mkswap $SWAP_FILE

# Enable swap
swapon $SWAP_FILE

# Add to fstab (mount automatically on boot)
if ! grep -q "$SWAP_FILE" /etc/fstab; then
    echo "$SWAP_FILE none swap sw 0 0" >> /etc/fstab
    log_info "Added to /etc/fstab; swap will be enabled on boot"
fi

# Tune swappiness (lower the tendency to swap; prefer RAM)
SWAPPINESS=10
if ! grep -q "vm.swappiness" /etc/sysctl.conf; then
    echo "vm.swappiness=$SWAPPINESS" >> /etc/sysctl.conf
fi
sysctl vm.swappiness=$SWAPPINESS >/dev/null

log_info "Swap created successfully!"
echo ""
echo "Current swap status:"
swapon --show
echo ""
free -h
@@ -42,10 +42,10 @@ if ! docker info >/dev/null 2>&1; then
     exit 1
 fi

-if command -v docker-compose >/dev/null 2>&1; then
-    COMPOSE_CMD="docker-compose"
-elif docker compose version >/dev/null 2>&1; then
+if docker compose version >/dev/null 2>&1; then
     COMPOSE_CMD="docker compose"
+elif command -v docker-compose >/dev/null 2>&1; then
+    COMPOSE_CMD="docker-compose"
 else
     echo -e "${RED}[ERROR]${NC} docker compose not found; please install it first"
     exit 1
@@ -79,20 +79,20 @@ ENV GOPATH=/root/go
 ENV PATH=/usr/local/go/bin:$PATH:$GOPATH/bin
 ENV GOPROXY=https://goproxy.cn,direct

-# 5. Install uv (a very fast Python package manager)
-RUN pip install uv --break-system-packages
-
-# Install the Python dependencies (uv downloads in parallel, 10-100x faster)
-COPY backend/requirements.txt .
-RUN --mount=type=cache,target=/root/.cache/uv \
-    uv pip install --system -r requirements.txt --break-system-packages && \
-    rm -f /usr/local/lib/python3.*/dist-packages/argparse.py && \
-    rm -rf /usr/local/lib/python3.*/dist-packages/__pycache__/argparse*
-
 COPY --from=go-builder /usr/local/go /usr/local/go
 COPY --from=go-builder /go/bin/* /usr/local/bin/
 COPY --from=go-builder /usr/local/bin/massdns /usr/local/bin/massdns

+# 5. Install uv (Python package manager) and the Python dependencies
+COPY backend/requirements.txt .
+RUN pip install uv --break-system-packages && \
+    uv pip install --system -r requirements.txt --break-system-packages && \
+    rm -f /usr/local/lib/python3.*/dist-packages/argparse.py && \
+    rm -rf /usr/local/lib/python3.*/dist-packages/__pycache__/argparse* && \
+    rm -rf /root/.cache/uv && \
+    apt-get clean && \
+    rm -rf /var/lib/apt/lists/*

 # 6. Copy the backend code
 COPY backend /app/backend
 ENV PYTHONPATH=/app/backend
@@ -152,13 +152,13 @@ sequenceDiagram

### Local development testing
```bash
-# Add to docker/.env
-TASK_EXECUTOR_IMAGE=docker-agent:latest  # point at the locally built image
+# Add to docker/.env (dev mode sets this automatically)
+TASK_EXECUTOR_IMAGE=docker-worker:v1.1.0-dev  # point at the locally built image
```

### Starting in dev mode
```bash
-# Use the locally built image
+# Use the locally built image (built automatically and tagged ${VERSION}-dev)
./install.sh --dev
./start.sh --dev
```
@@ -238,7 +238,8 @@ curl -s https://hub.docker.com/v2/repositories/yyhuni/xingrin-worker/tags/
4. ✅ Clean up old images with `docker system prune`

### Development debugging
-1. ✅ Use `--dev` mode for local testing
+1. ✅ Use `--dev` mode for local testing (automatically builds `docker-worker:${VERSION}-dev`)
2. ✅ For remote testing, push a test version to the Hub first
-3. ✅ Avoid the `latest` tag in production
-4. ✅ Roll back versions by changing `IMAGE_TAG`
+3. ✅ Avoid the `latest` tag in production; always use an explicit version number
+4. ✅ Use the `-dev` suffix to distinguish development builds
+5. ✅ Roll back versions by changing `IMAGE_TAG`
@@ -78,21 +78,21 @@ export function createIPAddressColumns(params: {
    enableSorting: false,
    enableHiding: false,
  },
- // IP address column
+ // IP column
  {
    accessorKey: "ip",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="IP 地址" />
+     <DataTableColumnHeader column={column} title="IP Address" />
    ),
    cell: ({ row }) => (
      <TruncatedCell value={row.original.ip} maxLength="ip" mono />
    ),
  },
- // associated-hostnames column
+ // host column
  {
    accessorKey: "hosts",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="关联主机" />
+     <DataTableColumnHeader column={column} title="Hosts" />
    ),
    cell: ({ getValue }) => {
      const hosts = getValue<string[]>()
@@ -107,7 +107,7 @@ export function createIPAddressColumns(params: {
      return (
        <div className="flex flex-col gap-1">
          {displayHosts.map((host, index) => (
-           <span key={index} className="text-sm font-mono">{host}</span>
+           <TruncatedCell key={index} value={host} maxLength="host" mono />
          ))}
          {hasMore && (
            <Badge variant="secondary" className="text-xs w-fit">
@@ -118,11 +118,11 @@ export function createIPAddressColumns(params: {
      )
    },
  },
- // discovery-time column
+ // discoveredAt column
  {
    accessorKey: "discoveredAt",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="发现时间" />
+     <DataTableColumnHeader column={column} title="Discovered At" />
    ),
    cell: ({ getValue }) => {
      const value = getValue<string | undefined>()
@@ -133,7 +133,7 @@ export function createIPAddressColumns(params: {
  {
    accessorKey: "ports",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="开放端口" />
+     <DataTableColumnHeader column={column} title="Open Ports" />
    ),
    cell: ({ getValue }) => {
      const ports = getValue<number[]>()
@@ -191,7 +191,7 @@ export function createIPAddressColumns(params: {
          </PopoverTrigger>
          <PopoverContent className="w-80 p-3">
            <div className="space-y-2">
-             <h4 className="font-medium text-sm">所有开放端口 ({sortedPorts.length})</h4>
+             <h4 className="font-medium text-sm">All Open Ports ({sortedPorts.length})</h4>
              <div className="flex flex-wrap gap-1 max-h-32 overflow-y-auto">
                {sortedPorts.map((port, index) => (
                  <Badge
@@ -267,7 +267,7 @@ export const createTargetColumns = ({
  {
    accessorKey: "name",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="目标名称" />
+     <DataTableColumnHeader column={column} title="Target Name" />
    ),
    cell: ({ row }) => (
      <TargetNameCell
@@ -282,7 +282,7 @@ export const createTargetColumns = ({
  {
    accessorKey: "type",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="类型" />
+     <DataTableColumnHeader column={column} title="Type" />
    ),
    cell: ({ row }) => {
      const type = row.getValue("type") as string | null
@@ -188,7 +188,7 @@ export const createEngineColumns = ({
  {
    accessorKey: "name",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="引擎名称" />
+     <DataTableColumnHeader column={column} title="Engine Name" />
    ),
    cell: ({ row }) => {
      const name = row.getValue("name") as string
@@ -180,7 +180,7 @@ export const createScheduledScanColumns = ({
  {
    accessorKey: "name",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="任务名称" />
+     <DataTableColumnHeader column={column} title="Task Name" />
    ),
    cell: ({ row }) => {
      const name = row.getValue("name") as string
@@ -216,7 +216,7 @@ export const createScheduledScanColumns = ({
  {
    accessorKey: "engineName",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="扫描引擎" />
+     <DataTableColumnHeader column={column} title="Scan Engine" />
    ),
    cell: ({ row }) => {
      const engineName = row.getValue("engineName") as string
@@ -231,7 +231,7 @@ export const createScheduledScanColumns = ({
  // Cron expression column
  {
    accessorKey: "cronExpression",
-   header: "调度时间",
+   header: "Cron Expression",
    cell: ({ row }) => {
      const cron = row.original.cronExpression
      return (
@@ -251,7 +251,7 @@ export const createScheduledScanColumns = ({
  // Target column (shows the organization or the target depending on scanMode)
  {
    accessorKey: "scanMode",
-   header: "目标",
+   header: "Target",
    cell: ({ row }) => {
      const scanMode = row.original.scanMode
      const organizationName = row.original.organizationName
@@ -283,7 +283,7 @@ export const createScheduledScanColumns = ({
  {
    accessorKey: "isEnabled",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="状态" />
+     <DataTableColumnHeader column={column} title="Status" />
    ),
    cell: ({ row }) => {
      const isEnabled = row.getValue("isEnabled") as boolean
@@ -308,7 +308,7 @@ export const createScheduledScanColumns = ({
  {
    accessorKey: "nextRunTime",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="下次执行" />
+     <DataTableColumnHeader column={column} title="Next Run" />
    ),
    cell: ({ row }) => {
      const nextRunTime = row.getValue("nextRunTime") as string | undefined
@@ -324,7 +324,7 @@ export const createScheduledScanColumns = ({
  {
    accessorKey: "runCount",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="执行次数" />
+     <DataTableColumnHeader column={column} title="Run Count" />
    ),
    cell: ({ row }) => {
      const count = row.getValue("runCount") as number
@@ -338,7 +338,7 @@ export const createScheduledScanColumns = ({
  {
    accessorKey: "lastRunTime",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="上次执行" />
+     <DataTableColumnHeader column={column} title="Last Run" />
    ),
    cell: ({ row }) => {
      const lastRunTime = row.getValue("lastRunTime") as string | undefined
@@ -9,7 +9,7 @@ import { useSystemLogs } from "@/hooks/use-system-logs"

 export function SystemLogsView() {
   const { theme } = useTheme()
-  const { data } = useSystemLogs({ lines: 200 })
+  const { data } = useSystemLogs({ lines: 500 })

   const content = useMemo(() => data?.content ?? "", [data?.content])
@@ -100,7 +100,7 @@ export const createSubdomainColumns = ({
  {
    accessorKey: "discoveredAt",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="发现时间" />
+     <DataTableColumnHeader column={column} title="Discovered At" />
    ),
    cell: ({ getValue }) => {
      const value = getValue<string | undefined>()
@@ -95,7 +95,7 @@ export const commandColumns: ColumnDef<Command>[] = [
  {
    accessorKey: "displayName",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="名称" />
+     <DataTableColumnHeader column={column} title="Name" />
    ),
    cell: ({ row }) => {
      const displayName = row.getValue("displayName") as string
@@ -136,7 +136,7 @@ export const commandColumns: ColumnDef<Command>[] = [
  {
    accessorKey: "tool",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="所属工具" />
+     <DataTableColumnHeader column={column} title="Tool" />
    ),
    cell: ({ row }) => {
      const tool = row.original.tool
@@ -156,7 +156,7 @@ export const commandColumns: ColumnDef<Command>[] = [
  {
    accessorKey: "commandTemplate",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="命令模板" />
+     <DataTableColumnHeader column={column} title="Command Template" />
    ),
    cell: ({ row }) => {
      const template = row.getValue("commandTemplate") as string
@@ -192,7 +192,7 @@ export const commandColumns: ColumnDef<Command>[] = [
  {
    accessorKey: "description",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="描述" />
+     <DataTableColumnHeader column={column} title="Description" />
    ),
    cell: ({ row }) => {
      const description = row.getValue("description") as string
@@ -217,7 +217,7 @@ export const commandColumns: ColumnDef<Command>[] = [
  {
    accessorKey: "updatedAt",
    header: ({ column }) => (
-     <DataTableColumnHeader column={column} title="更新时间" />
+     <DataTableColumnHeader column={column} title="Updated At" />
    ),
    cell: ({ row }) => (
      <div className="text-sm text-muted-foreground">
@@ -81,7 +81,7 @@ export function createVulnerabilityColumns({
  },
  {
    accessorKey: "vulnType",
-   header: "类型",
+   header: "Vuln Type",
    cell: ({ row }) => {
      const vulnType = row.getValue("vulnType") as string
      const vulnerability = row.original
@@ -143,7 +143,7 @@ export function createVulnerabilityColumns({
  },
  {
    accessorKey: "discoveredAt",
-   header: "发现时间",
+   header: "Discovered At",
    cell: ({ row }) => {
      const discoveredAt = row.getValue("discoveredAt") as string
      return (
@@ -62,7 +62,7 @@ export function useUpdateNucleiRepo() {
    mutationFn: (data: {
      id: number
      repoUrl?: string
-   }) => nucleiRepoApi.updateRepo(data.id, data),
+   }) => nucleiRepoApi.updateRepo(data.id, { repoUrl: data.repoUrl }),
    onSuccess: (_data, variables) => {
      toast.success("Repository configuration updated")
      queryClient.invalidateQueries({ queryKey: ["nuclei-repos"] })
@@ -75,9 +75,9 @@ export const nucleiRepoApi = {
    return response.data
  },

- /** Update a repository */
+ /** Update a repository (partial update) */
  updateRepo: async (repoId: number, payload: UpdateRepoPayload): Promise<NucleiRepoResponse> => {
-   const response = await api.put<NucleiRepoResponse>(`${BASE_URL}${repoId}/`, payload)
+   const response = await api.patch<NucleiRepoResponse>(`${BASE_URL}${repoId}/`, payload)
    return response.data
  },
install.sh (109 lines changed)
@@ -75,7 +75,12 @@ fi

 # Get the real user ($SUDO_USER is the real user when running via sudo)
 REAL_USER="${SUDO_USER:-$USER}"
-REAL_HOME=$(getent passwd "$REAL_USER" | cut -d: -f6)
+# macOS has no getent; fall back to ~$USER expansion
+if command -v getent &>/dev/null; then
+    REAL_HOME=$(getent passwd "$REAL_USER" | cut -d: -f6)
+else
+    REAL_HOME=$(eval echo "~$REAL_USER")
+fi

 # Project root directory
 ROOT_DIR="$(cd "$(dirname "$0")" && pwd)"
@@ -110,13 +115,22 @@ generate_random_string() {
    fi
 }

+# Cross-platform sed -i (works on macOS and Linux)
+sed_inplace() {
+    if [[ "$OSTYPE" == "darwin"* ]]; then
+        sed -i '' "$@"
+    else
+        sed -i "$@"
+    fi
+}

 # Update a key in a .env file
 update_env_var() {
    local file="$1"
    local key="$2"
    local value="$3"
    if grep -q "^$key=" "$file"; then
-       sed -i -e "s|^$key=.*|$key=$value|" "$file"
+       sed_inplace "s|^$key=.*|$key=$value|" "$file"
    else
        echo "$key=$value" >> "$file"
    fi
@@ -126,7 +140,7 @@ update_env_var() {
 GENERATED_DB_PASSWORD=""
 GENERATED_DJANGO_KEY=""

-# Generate a self-signed HTTPS certificate (no-domain scenario)
+# Generate a self-signed HTTPS certificate (in a container, avoiding host openssl compatibility issues)
 generate_self_signed_cert() {
    local ssl_dir="$DOCKER_DIR/nginx/ssl"
    local fullchain="$ssl_dir/fullchain.pem"
@@ -139,14 +153,18 @@ generate_self_signed_cert() {

    info "No HTTPS certificate detected; generating a self-signed certificate (localhost)..."
    mkdir -p "$ssl_dir"
-   if openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
-       -keyout "$privkey" \
-       -out "$fullchain" \
+
+   # Generate the certificate in a container to avoid depending on the host's openssl version
+   if docker run --rm -v "$ssl_dir:/ssl" alpine/openssl \
+       req -x509 -nodes -newkey rsa:2048 -days 365 \
+       -keyout /ssl/privkey.pem \
+       -out /ssl/fullchain.pem \
        -subj "/C=CN/ST=NA/L=NA/O=XingRin/CN=localhost" \
-       -addext "subjectAltName=DNS:localhost,IP:127.0.0.1" >/dev/null 2>&1; then
+       -addext "subjectAltName=DNS:localhost,IP:127.0.0.1" \
+       >/dev/null 2>&1; then
        success "Self-signed certificate generated: $ssl_dir"
    else
-       warn "Failed to generate a self-signed certificate; check that openssl is available, or place certificates in $ssl_dir manually"
+       warn "Failed to generate a self-signed certificate; please place certificates in $ssl_dir manually"
    fi
 }

@@ -225,7 +243,7 @@ show_summary() {

 step "[1/3] Checking basic commands"
 MISSING_CMDS=()
-for cmd in git curl jq openssl; do
+for cmd in git curl; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        MISSING_CMDS+=("$cmd")
        warn "Not installed: $cmd"
@@ -260,6 +278,46 @@ else
    success "docker compose installed"
 fi

+# ==============================================================================
+# Swap configuration (Linux only)
+# ==============================================================================
+if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+    # Current memory size in GB (pure bash arithmetic)
+    TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
+    TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))
+
+    # Current swap size in GB
+    CURRENT_SWAP_KB=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
+    CURRENT_SWAP_GB=$((CURRENT_SWAP_KB / 1024 / 1024))
+
+    # Recommended swap size (same as RAM, minimum 1GB, maximum 8GB)
+    RECOMMENDED_SWAP=$TOTAL_MEM_GB
+    [ "$RECOMMENDED_SWAP" -lt 1 ] && RECOMMENDED_SWAP=1
+    [ "$RECOMMENDED_SWAP" -gt 8 ] && RECOMMENDED_SWAP=8
+
+    echo ""
+    info "System memory: ${TOTAL_MEM_GB}GB, current swap: ${CURRENT_SWAP_GB}GB"
+
+    # Prompt the user if swap is below the recommendation
+    if [ "$CURRENT_SWAP_GB" -lt "$RECOMMENDED_SWAP" ]; then
+        echo -n -e "${BOLD}${CYAN}[?] Enable ${RECOMMENDED_SWAP}GB of swap? It improves scan stability (Y/n) ${RESET}"
+        read -r setup_swap
+        echo
+        if [[ ! $setup_swap =~ ^[Nn]$ ]]; then
+            info "Configuring ${RECOMMENDED_SWAP}GB of swap..."
+            if bash "$ROOT_DIR/docker/scripts/setup-swap.sh" "$RECOMMENDED_SWAP"; then
+                success "Swap configured"
+            else
+                warn "Swap configuration failed; continuing with the install..."
+            fi
+        else
+            info "Skipping swap configuration"
+        fi
+    else
+        success "Swap is already sufficient: ${CURRENT_SWAP_GB}GB"
+    fi
+fi

 step "[3/3] Initializing configuration"
 DOCKER_DIR="$ROOT_DIR/docker"
 if [ ! -d "$DOCKER_DIR" ]; then
@@ -353,10 +411,10 @@ if [ -f "$DOCKER_DIR/.env.example" ]; then
        -c "CREATE DATABASE $prefect_db;" 2>/dev/null || true
    success "Database prepared"

-   sed -i "s/^DB_HOST=.*/DB_HOST=$db_host/" "$DOCKER_DIR/.env"
-   sed -i "s/^DB_PORT=.*/DB_PORT=$db_port/" "$DOCKER_DIR/.env"
-   sed -i "s/^DB_USER=.*/DB_USER=$db_user/" "$DOCKER_DIR/.env"
-   sed -i "s/^DB_PASSWORD=.*/DB_PASSWORD=$db_password/" "$DOCKER_DIR/.env"
+   sed_inplace "s/^DB_HOST=.*/DB_HOST=$db_host/" "$DOCKER_DIR/.env"
+   sed_inplace "s/^DB_PORT=.*/DB_PORT=$db_port/" "$DOCKER_DIR/.env"
+   sed_inplace "s/^DB_USER=.*/DB_USER=$db_user/" "$DOCKER_DIR/.env"
+   sed_inplace "s/^DB_PASSWORD=.*/DB_PASSWORD=$db_password/" "$DOCKER_DIR/.env"
    success "Remote database configured: $db_user@$db_host:$db_port"
 else
    info "Using the local PostgreSQL container"
@@ -396,11 +454,28 @@ DOCKER_USER=$(grep "^DOCKER_USER=" "$DOCKER_DIR/.env" 2>/dev/null | cut -d= -f2)
 DOCKER_USER=${DOCKER_USER:-yyhuni}
 WORKER_IMAGE="${DOCKER_USER}/xingrin-worker:${APP_VERSION}"

-info "Pulling: $WORKER_IMAGE"
-if docker pull "$WORKER_IMAGE"; then
-   success "Worker image pulled"
+# In dev mode, build the worker image locally
+if [ "$DEV_MODE" = true ]; then
+   info "Dev mode: building the local worker image..."
+   if docker compose -f "$DOCKER_DIR/docker-compose.dev.yml" build worker; then
+       # Point TASK_EXECUTOR_IMAGE at the locally built image (tagged with the version plus -dev)
+       update_env_var "$DOCKER_DIR/.env" "TASK_EXECUTOR_IMAGE" "docker-worker:${APP_VERSION}-dev"
+       success "Local worker image built: docker-worker:${APP_VERSION}-dev"
+   else
+       error "Local worker image build failed in dev mode!"
+       error "Please fix the build errors and retry"
+       exit 1
+   fi
 else
-   warn "Worker image pull failed; the scan will retry pulling automatically"
+   info "Pulling: $WORKER_IMAGE"
+   if docker pull "$WORKER_IMAGE"; then
+       success "Worker image pulled"
+   else
+       error "Worker image pull failed; cannot continue the installation"
+       error "Check network connectivity and Docker Hub access"
+       error "Image: $WORKER_IMAGE"
+       exit 1
+   fi
 fi

 # ==============================================================================
uninstall.sh (49 lines changed)
@@ -80,12 +80,12 @@ if [[ $ans_stop =~ ^[Yy]$ ]]; then
    # Force-stop and remove containers that may hold the network (xingrin-agent etc.)
    docker rm -f xingrin-agent xingrin-watchdog 2>/dev/null || true

-   # Stop the containers for both modes
+   # Stop the containers for both modes (without -v; volumes are handled separately in step 5)
    [ -f "docker-compose.yml" ] && ${COMPOSE_CMD} -f docker-compose.yml down 2>/dev/null || true
    [ -f "docker-compose.dev.yml" ] && ${COMPOSE_CMD} -f docker-compose.dev.yml down 2>/dev/null || true

    # Remove the networks manually (in case compose failed to)
-   docker network rm xingrin_network 2>/dev/null || true
+   docker network rm xingrin_network docker_default 2>/dev/null || true

    success "Containers and networks stopped/removed (if they existed)."
 else
@@ -156,19 +156,28 @@ ans_db=${ans_db:-Y}

 if [[ $ans_db =~ ^[Yy]$ ]]; then
    info "Trying to remove XingRin-related Postgres containers and data volumes..."
-   # The docker-compose project name is "docker"; common resource names (missing ones are ignored):
-   # - container: docker-postgres-1
-   # - volume: docker_postgres_data (the postgres_data volume in compose)
-   docker rm -f docker-postgres-1 2>/dev/null || true
-   docker volume rm docker_postgres_data 2>/dev/null || true
-   success "Local Postgres container and data volume removal attempted (missing ones are ignored)."
+   # Remove the possible container names (they differ across compose versions)
+   docker rm -f docker-postgres-1 xingrin-postgres postgres 2>/dev/null || true
+
+   # Remove the possible volume names (they depend on the project name and compose config)
+   # List the volumes to delete first
+   for vol in postgres_data docker_postgres_data xingrin_postgres_data; do
+       if docker volume inspect "$vol" >/dev/null 2>&1; then
+           if docker volume rm "$vol" 2>/dev/null; then
+               success "Removed volume: $vol"
+           else
+               warn "Cannot remove volume: $vol (it may be in use; stop all containers first)"
+           fi
+       fi
+   done
+   success "Local Postgres data-volume cleanup finished."
 else
    warn "Local Postgres container and volume kept."
 fi

-step "[6/6] Remove XingRin-related Docker images? (y/N)"
+step "[6/6] Remove XingRin-related Docker images? (Y/n)"
 read -r ans_images
-ans_images=${ans_images:-N}
+ans_images=${ans_images:-Y}

 if [[ $ans_images =~ ^[Yy]$ ]]; then
    info "Removing Docker images..."
@@ -199,9 +208,29 @@ if [[ $ans_images =~ ^[Yy]$ ]]; then
    fi

    docker rmi redis:7-alpine 2>/dev/null || true

+   # Remove the locally built dev images
+   docker rmi docker-server docker-frontend docker-nginx docker-agent docker-worker 2>/dev/null || true
+   docker rmi "docker-worker:${IMAGE_TAG}-dev" 2>/dev/null || true

    success "Docker images removed (if they existed)."
 else
    warn "Docker images kept."
 fi

+# Clean the build cache (optional; makes the next build slower)
+echo ""
+echo -n -e "${BOLD}${CYAN}[?] Clean the Docker build cache? (y/N) ${RESET}"
+echo -e "${YELLOW}(The next build will be slow after cleaning; usually unnecessary)${RESET}"
+read -r ans_cache
+ans_cache=${ans_cache:-N}
+
+if [[ $ans_cache =~ ^[Yy]$ ]]; then
+    info "Cleaning the Docker build cache..."
+    docker builder prune -af 2>/dev/null || true
+    success "Build cache cleaned."
+else
+    warn "Build cache kept (recommended)."
+fi

 success "Uninstall finished."
update.sh (11 lines changed)
@@ -18,6 +18,15 @@

 cd "$(dirname "$0")"

+# Cross-platform sed -i (works on macOS and Linux)
+sed_inplace() {
+    if [[ "$OSTYPE" == "darwin"* ]]; then
+        sed -i '' "$@"
+    else
+        sed -i "$@"
+    fi
+}

 # Parse arguments to decide the mode
 DEV_MODE=false
 for arg in "$@"; do
@@ -92,7 +101,7 @@ if [ -f "VERSION" ]; then
    if [ -n "$NEW_VERSION" ]; then
        # Update IMAGE_TAG in .env (all nodes will use this image version)
        if grep -q "^IMAGE_TAG=" "docker/.env"; then
-           sed -i "s/^IMAGE_TAG=.*/IMAGE_TAG=$NEW_VERSION/" "docker/.env"
+           sed_inplace "s/^IMAGE_TAG=.*/IMAGE_TAG=$NEW_VERSION/" "docker/.env"
            echo -e "  ${GREEN}+${NC} Version synced: IMAGE_TAG=$NEW_VERSION"
        else
            echo "IMAGE_TAG=$NEW_VERSION" >> "docker/.env"