Compare commits

..

45 Commits

Author SHA1 Message Date
2e7cf69512 Add usage notes 2025-05-10 17:23:06 +08:00
76240a12e6 Add federated learning evaluation metrics; bugfix: fix aggregation of trained model parameters 2025-05-10 17:22:56 +08:00
98321aa7d5 Training model configuration 2025-05-10 16:19:00 +08:00
d39aa31651 Remove unused files 2025-05-10 16:18:37 +08:00
f127ae2852 Add federated learning metrics; fix: PyTorch model loading mismatch 2025-05-07 10:41:36 +08:00
3a65d89315 ignore .vscode 2025-05-07 10:41:06 +08:00
2a3e5b17e7 YOLOv8 comparison training 2025-05-05 17:30:12 +08:00
c57c8f3552 Ignore training results and .pt files 2025-05-05 17:29:58 +08:00
310131d876 Adjust file structure 2025-05-05 17:03:41 +08:00
myh ba4508507b Improve evaluation metrics 2025-04-22 21:41:58 +08:00
myh 89d8f4c0df Add evaluation metrics 2025-04-22 16:35:29 +08:00
myh d1ed958db5 Remove example module 2025-04-22 16:35:19 +08:00
myh abd033b831 Training commands 2025-04-22 16:35:10 +08:00
myh 69482e6a3f Adjust parameters to meet Linux path requirements 2025-04-22 14:56:45 +08:00
myh 9f827af58e Remove unused samples 2025-04-22 14:51:15 +08:00
myh 338a5e07e8 Adjust parameters to match the training dataset 2025-04-22 00:19:43 +08:00
myh 9d99b00e55 Update the minimal test example 2025-04-21 23:50:41 +08:00
myh dd0e0d869c Ignore cache files 2025-04-21 23:50:12 +08:00
myh 8cd6df4527 Dataset test sample configuration 2025-04-21 22:27:19 +08:00
myh 132ed64136 Dataset test samples 2025-04-21 22:26:52 +08:00
myh be1e3627e7 Evaluation metric tests 2025-04-21 17:51:38 +08:00
myh d139f5afcf Evaluation metrics 2025-04-21 17:51:32 +08:00
myh 428790ab91 Project refactor 2025-04-20 16:36:41 +08:00
myh 65e10f3e7d Ignore model files 2025-04-20 15:25:05 +08:00
myh 960b66a692 Add __init__ files to the Python packages 2025-04-20 15:21:19 +08:00
myh ef3d521e4a Test dataset files 2025-04-20 15:20:40 +08:00
myh 3b80f237fa Federated averaging algorithm combined with YOLOv8 2025-04-20 15:20:16 +08:00
myh f320e79702 Change the project structure 2025-04-20 15:19:55 +08:00
myh 34a5247dd2 Federated learning example project: restructure 2025-04-20 15:19:24 +08:00
myh 1930e1b96b Formatting 2025-04-19 20:31:12 +08:00
myh 5095dbe6c0 Format code 2025-04-19 20:09:42 +08:00
myh 554c7e6083 Remove redundant algorithms 2025-04-19 20:09:17 +08:00
myh 0d84bba234 Test images 2025-04-19 19:01:07 +08:00
myh c81de41b3e Add three different modes 2025-04-19 18:59:35 +08:00
myh b8ffb902b3 Ignore the third-party library folder 2025-04-19 18:59:14 +08:00
myh da36a8fc09 Add a parameter control list 2025-04-19 18:58:44 +08:00
myh 45db741f35 Remove unused files 2025-04-19 13:08:24 +08:00
myh 5df0e15baf Static image test 2025-04-19 13:08:15 +08:00
myh 5e72ac28cc YOLO model file 2025-04-19 13:07:47 +08:00
myh 5b61b48d50 Dependencies 2025-04-19 13:07:37 +08:00
myh 160bb2e365 Test images 2025-04-19 13:07:28 +08:00
myh 65ee0565c2 Integrate YOLOv8 2025-04-18 22:51:46 +08:00
myh ca275ba74b Image fusion module 2025-04-18 22:15:37 +08:00
myh 1cfc280f34 Federated learning module 2025-04-18 22:15:25 +08:00
myh f5e527e02e Exclude the .idea folder 2025-04-18 22:06:27 +08:00
32 changed files with 944 additions and 3 deletions

.gitignore vendored (12 changed lines)

@@ -178,7 +178,7 @@ cython_debug/
# ---> JetBrains
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
.idea/
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
@@ -275,7 +275,8 @@ fabric.properties
.LSOverride
# Icon must end with two \r
Icon
Icon
# Thumbnails
._*
@@ -296,3 +297,10 @@ Network Trash Folder
Temporary Items
.apdisk
# project files
/whl_packages/
runs/
*.pt
*.cache
.vscode/
*.json

View File

@@ -1,3 +1,35 @@
# Graduation-Project
Graduation project: a UAV detection system based on YOLO and image fusion, with research on its security.

Run federated training on Linux:
```bash
cd federated_learning
```
```bash
nohup python -u yolov8_fed.py > runtime.log 2>&1 &
```
Run centralized training on Linux:
```bash
cd yolov8
```
```bash
nohup python -u yolov8_train.py > runtime.log 2>&1 &
```
Monitor the log file in real time:
```bash
tail -f runtime.log
```
Run the image fusion and registration code:
```bash
cd image_fusion
```
```bash
python Image_Registration_test.py
```

BIN dataset/train1/images/6.jpg (new file, not shown; 145 KiB)

BIN dataset/train1/images/7.jpg (new file, not shown; 97 KiB)

View File

@@ -0,0 +1,2 @@
0 0.5375 0.37395833333333334 0.253125 0.16458333333333333
0 0.2890625 0.5833333333333334 0.196875 0.1125
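(These are YOLO-format label files: each line is "class x_center y_center width height", with coordinates normalized to the image size.)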

View File

@@ -0,0 +1 @@
0 0.36328125 0.525 0.7109375 0.8083333333333333

View File

@@ -0,0 +1,4 @@
train: ./images
val: ../val
nc: 1
names: ['uav']

BIN (new binary file, not shown; 136 KiB)

BIN (new binary file, not shown; 1.5 MiB)

View File

@@ -0,0 +1 @@
0 0.6934895833333333 0.6527777777777778 0.008854166666666666 0.018518518518518517

View File

@@ -0,0 +1 @@
0 0.423698 0.593519 0.061979 0.029630

View File

@@ -0,0 +1,4 @@
train: ./images
val: ../val
nc: 1
names: ['uav']

BIN (new binary file, not shown; 77 KiB)

BIN (new binary file, not shown; 74 KiB)

View File

@@ -0,0 +1 @@
0 0.5109375 0.5322916666666667 0.125 0.13958333333333334

View File

@@ -0,0 +1 @@
0 0.55078125 0.296875 0.0890625 0.08958333333333333

View File

@@ -0,0 +1,16 @@
# Create the test directory structure
mkdir -p ./test_data/{client1,client2}/{train,val}/images
mkdir -p ./test_data/{client1,client2}/{train,val}/labels

# Generate dummy data (each client only needs 2 images per split)
for client in client1 client2; do
  for split in train val; do
    # Create blank 128x128 RGB images
    magick -size 128x128 xc:white test_data/${client}/${split}/images/img1.jpg
    magick -size 128x128 xc:black test_data/${client}/${split}/images/img2.jpg
    # Create sample label files
    echo "0 0.5 0.5 0.2 0.2" > test_data/${client}/${split}/labels/img1.txt
    echo "1 0.3 0.3 0.4 0.4" > test_data/${client}/${split}/labels/img2.txt
  done
done
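# Note: the client1.yaml and client2.yaml configs added in this change point
# their train/val paths at the ./test_data tree generated above.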

View File

View File

@@ -0,0 +1,4 @@
train: ../test_data/client1/train/images
val: ../test_data/client1/val/images
nc: 2
names: [ 'class0', 'class1' ]

View File

@@ -0,0 +1,4 @@
train: ../test_data/client2/train/images
val: ../test_data/client2/val/images
nc: 2
names: [ 'class0', 'class1' ]

BIN (new binary file, not shown)

View File

@@ -0,0 +1,49 @@
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license
# Ultralytics YOLOv8 object detection model with P3/8 - P5/32 outputs
# Model docs: https://docs.ultralytics.com/models/yolov8
# Task docs: https://docs.ultralytics.com/tasks/detect
# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 129 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 129 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 169 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 209 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 209 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)
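
For orientation, this architecture file is what the Ultralytics API consumes; a minimal sketch of instantiating it (the local yolov8n.pt checkpoint name mirrors the federated script below and is an assumption about what is present on disk):

```python
from ultralytics import YOLO

# Build the model graph from the YAML (the 'n' scale is inferred from the
# file name), then copy matching weights over from a pretrained checkpoint.
model = YOLO("yolov8n.yaml").load("yolov8n.pt")
print(model.model.model[-1].nc)  # class count of the Detect head; 1 for this config
```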

View File

@@ -0,0 +1,252 @@
import copy
import glob
import json
import os

import torch
import yaml
from ultralytics import YOLO


# ------------ Federated learning utility functions ------------
def federated_avg(global_model, client_weights):
    """Core federated averaging (FedAvg) algorithm."""
    # Total number of training samples across all clients
    total_samples = sum(n for _, n in client_weights)
    if total_samples == 0:
        raise ValueError("Total number of samples must be positive.")

    # Get the state dict of the PyTorch model wrapped by the YOLO object
    global_dict = global_model.model.state_dict()

    # Unpack each client's state_dict and its sample count
    state_dicts, sample_counts = zip(*client_weights)

    # Clone the parameters and detach them from the computation graph
    global_dict_copy = {
        k: v.clone().detach().requires_grad_(False) for k, v in global_dict.items()
    }

    # Aggregate every parameter that is present on all clients
    for key in global_dict_copy:
        if all(key in sd for sd in state_dicts):
            # Weight each client's tensor by its share of the total samples
            weighted_tensors = []
            for client_state, sample_count in zip(state_dicts, sample_counts):
                weight = sample_count / total_samples
                weighted_tensors.append(client_state[key].float() * weight)
            # Sum the weighted tensors to obtain the new global parameter
            global_dict_copy[key] = torch.stack(weighted_tensors, dim=0).sum(dim=0)
        # else: the key is missing on some clients, so the global value is kept

    # Load the aggregated parameters back into the YOLO model
    global_model.model.load_state_dict(global_dict_copy, strict=True)

    # Monitor several key layers to verify that aggregation took effect
    MONITOR_KEYS = [
        "model.0.conv.weight",
        "model.1.conv.weight",
        "model.3.conv.weight",
        "model.5.conv.weight",
        "model.7.conv.weight",
        "model.9.cv1.conv.weight",
        "model.12.cv1.conv.weight",
        "model.15.cv1.conv.weight",
        "model.18.cv1.conv.weight",
        "model.21.cv1.conv.weight",
        "model.22.dfl.conv.weight",
    ]
    with open("aggregation_check.txt", "a") as f:
        f.write("\n=== Parameter aggregation check ===\n")
        for key in MONITOR_KEYS:
            # Mean of the aggregated parameter
            aggregated_mean = global_dict[key].mean().item()
            # Mean of the same layer on each client
            client_means = [sd[key].float().mean().item() for sd in state_dicts]
            f.write(f"'{key}' aggregated mean: {aggregated_mean:.6f}\n")
            f.write(f"Per-client means for this layer: {[f'{cm:.6f}' for cm in client_means]}\n")
            f.write(f"Max difference across clients: {max(client_means) - min(client_means):.6f}\n\n")
    return global_model
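
# FedAvg arithmetic, illustrated with assumed numbers (not from a real run):
# with two clients holding 100 and 300 samples, the weights are 0.25 and 0.75,
# so a parameter equal to 1.0 on client 1 and 2.0 on client 2 aggregates to
# 0.25 * 1.0 + 0.75 * 2.0 = 1.75.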


# ------------ Modified training flow ------------
def federated_train(num_rounds, clients_data):
    # ========== Metric containers ==========
    metrics = {
        "round": [],
        "val_mAP": [],             # validation mAP per round
        "client_mAPs": [],         # per-client local mAP on the validation set
        "communication_cost": [],  # communication cost per round (MB)
    }

    # Initialize the global model
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    global_model = (
        YOLO("/home/image1325/DATA/Graduation-Project/federated_learning/yolov8n.yaml")
        .load("/home/image1325/DATA/Graduation-Project/federated_learning/yolov8n.pt")
        .to(device)
    )
    global_model.model.model[-1].nc = 1  # set the number of detection classes to 1

    # Clone the global model for local training
    local_model = copy.deepcopy(global_model)

    for round_idx in range(num_rounds):
        client_weights = []

        # Local training on each client
        for data_path in clients_data:
            # Count the local training samples
            with open(data_path, "r") as f:
                config = yaml.safe_load(f)
            # Resolve the image directory relative to the YAML file's location
            yaml_dir = os.path.dirname(data_path)
            img_dir = os.path.join(yaml_dir, config.get("train", data_path))
            num_samples = (
                len(glob.glob(os.path.join(img_dir, "*.jpg")))
                + len(glob.glob(os.path.join(img_dir, "*.png")))
                + len(glob.glob(os.path.join(img_dir, "*.jpeg")))
            )

            # Start this round from the current global parameters
            local_model.model.load_state_dict(
                global_model.model.state_dict(), strict=True
            )

            # Local training (the original hyperparameter choices are kept)
            local_model.train(
                name=f"train{round_idx + 1}",  # current round
                data=data_path,
                epochs=16,      # local epochs per round
                imgsz=768,      # image size
                verbose=False,  # suppress verbose output
                batch=-1,       # automatic batch size
                workers=6,      # dataloader workers
            )

            # Collect the model parameters and the sample count
            client_weights.append((local_model.model.state_dict(), num_samples))

        # Aggregate the client updates into the global model
        global_model = federated_avg(global_model, client_weights)

        # ========== Evaluate the global model ==========
        # Copy the global model so evaluation cannot modify its parameters
        val_model = copy.deepcopy(global_model)
        with torch.no_grad():
            val_results = val_model.val(
                data="/mnt/DATA/uav_dataset_old/UAVdataset/fed_data.yaml",  # validation set config
                imgsz=768,      # image size
                batch=16,       # batch size
                verbose=False,  # suppress verbose output
            )
        # Discard the evaluation copy
        del val_model

        # box.map is mAP@0.5:0.95 in Ultralytics (box.map50 would be mAP@0.5)
        val_mAP = val_results.box.map

        # Communication cost, assuming all parameters are sent as float32 (4 bytes)
        model_size = sum(p.numel() * 4 for p in global_model.model.parameters()) / (
            1024**2
        )  # MB

        # Record this round's metrics
        metrics["round"].append(round_idx + 1)
        metrics["val_mAP"].append(val_mAP)
        metrics["communication_cost"].append(model_size)

        # Log this round's results
        with open("aggregation_check.txt", "a") as f:
            f.write(f"\n[Round {round_idx + 1}/{num_rounds}]\n")
            f.write(f"Validation mAP (0.5:0.95): {val_mAP:.4f}\n")
            f.write(f"Communication Cost: {model_size:.2f} MB\n\n")

    return global_model, metrics
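
# Sanity check for model_size (illustrative): YOLOv8n has about 3,157,200
# parameters (see the scale comments in yolov8n.yaml), so one full float32
# transfer is roughly 3157200 * 4 / 1024**2, i.e. about 12.0 MB per client per round.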

if __name__ == "__main__":
    # Federated training configuration: one dataset YAML per client
    clients_config = [
        "/mnt/DATA/uav_fed/train1/train1.yaml",  # client 1 data path
        "/mnt/DATA/uav_fed/train2/train2.yaml",  # client 2 data path
    ]
    # Local test datasets:
    # clients_config = [
    #     "/home/image1325/DATA/Graduation-Project/dataset/train1/train1.yaml",
    #     "/home/image1325/DATA/Graduation-Project/dataset/train2/train2.yaml",
    # ]

    # Run federated training
    final_model, metrics = federated_train(num_rounds=10, clients_data=clients_config)

    # Save the final model
    final_model.save("yolov8n_federated.pt")
    # final_model.export(format="onnx")  # export to ONNX format

    with open("metrics.json", "w") as f:
        json.dump(metrics, f, indent=4)
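
The metrics.json written above can be inspected after a run; a minimal plotting sketch using matplotlib (pinned in requirements.txt below), with the output file name being an assumption:

```python
import json

import matplotlib.pyplot as plt

# Load the per-round metrics written by yolov8_fed.py
with open("metrics.json") as f:
    metrics = json.load(f)

# Validation mAP of the aggregated global model, per communication round
plt.plot(metrics["round"], metrics["val_mAP"], marker="o")
plt.xlabel("Federated round")
plt.ylabel("Validation mAP (0.5:0.95)")
plt.title("Global model quality per round")
plt.savefig("val_map_per_round.png")
```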

View File

@@ -0,0 +1,354 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import argparse

import cv2
import numpy as np
from ultralytics import YOLO
from skimage.metrics import structural_similarity as ssim

# YOLOv8 model initialization
yolo_model = YOLO("best.pt")  # can be swapped for yolov8s/m/l etc.
yolo_model.to('cuda')  # enable GPU acceleration


def calculate_en(img):
    """Information entropy (expects a grayscale image)."""
    hist = cv2.calcHist([img], [0], None, [256], [0, 256])
    hist = hist / hist.sum()
    # EN = -sum(p * log2 p); the epsilon avoids log2(0)
    return -np.sum(hist * np.log2(hist + 1e-10))


def calculate_sf(img):
    """Spatial frequency (expects a grayscale image)."""
    rf = np.sqrt(np.mean(np.square(np.diff(img, axis=0))))  # row frequency
    cf = np.sqrt(np.mean(np.square(np.diff(img, axis=1))))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)


def calculate_mi(img1, img2):
    """Mutual information between two grayscale images."""
    hist_2d = np.histogram2d(img1.ravel(), img2.ravel(), 256)[0]
    pxy = hist_2d / hist_2d.sum()  # joint distribution
    px = np.sum(pxy, axis=1)       # marginal of img1
    py = np.sum(pxy, axis=0)       # marginal of img2
    # MI = sum p(x,y) * log2(p(x,y) / (p(x) * p(y)))
    return np.sum(pxy * np.log2(pxy / (px[:, None] * py[None, :] + 1e-10) + 1e-10))


def calculate_ssim(img1, img2):
    """SSIM (expects grayscale images)."""
    return ssim(img1, img2, data_range=255)


# Truncated linear RGB contrast stretch: clip below the 2nd percentile and
# above the 98th (the same percentage on both ends is typical) and rescale
# to the given output bounds.
def truncated_linear_stretch(image, truncated_value=2, maxout=255, min_out=0):
    """
    :param image: input BGR image
    :param truncated_value: percentile clipped on each end
    :param maxout: upper output bound
    :param min_out: lower output bound
    :return: contrast-stretched image
    """
    def gray_process(gray, maxout=maxout, minout=min_out):
        truncated_down = np.percentile(gray, truncated_value)
        truncated_up = np.percentile(gray, 100 - truncated_value)
        gray_new = ((maxout - minout) / (truncated_up - truncated_down)) * gray
        gray_new[gray_new < minout] = minout
        gray_new[gray_new > maxout] = maxout
        return np.uint8(gray_new)

    (b, g, r) = cv2.split(image)
    b = gray_process(b)
    g = gray_process(g)
    r = gray_process(r)
    result = cv2.merge((b, g, r))  # merge the stretched channels
    return result


# RGB image registration: use the daytime visible image and the infrared
# grayscale image to estimate the homography between their shared SIFT
# feature points.
def Images_matching(img_base, img_target):
    """
    :param img_base: visible (base) image
    :param img_target: image to match (infrared)
    :return: (flag, homography matrix, number of matched points)
    """
    start = time.time()
    # Optional contrast stretch of the visible image:
    # img_base = truncated_linear_stretch(img_base)
    img_base = cv2.cvtColor(img_base, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()

    # Detect SIFT keypoints and compute descriptors around them
    kp1, des1 = sift.detectAndCompute(img_base, None)
    kp2, des2 = sift.detectAndCompute(img_target, None)

    # KNN feature matching; a FLANN-based matcher would be an alternative
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)

    # Keep only the good matches: Lowe's ratio test keeps a match when the
    # nearest distance is below 0.75x the second-nearest distance
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append(m)
    src_pts = np.array([kp1[m.queryIdx].pt for m in good])  # query image points
    dst_pts = np.array([kp2[m.trainIdx].pt for m in good])  # train (template) image points
    if len(src_pts) <= 4:
        print("Not enough matches are found - {}/{}".format(len(good), 4))
        return 0, None, 0
    else:
        print(len(dst_pts), len(src_pts), "registration points")
        # Estimate the homography with RANSAC (reprojection threshold 4)
        H = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 4)
        end = time.time()
        # print("registration time", end - start)
        return 1, H[0], len(dst_pts)


def fusions(img_vl, img_inf):
    """
    :param img_vl: visible image (BGR)
    :param img_inf: infrared image (grayscale)
    :return: fused image
    """
    img_YUV = cv2.cvtColor(img_vl, cv2.COLOR_BGR2YUV)  # BGR input must be converted
    y, u, v = cv2.split(img_YUV)  # split the channels to get the Y channel
    # Average the visible luminance with the infrared intensity
    Yf = y * 0.5 + img_inf * 0.5
    Yf = Yf.astype(np.uint8)
    fusion = cv2.cvtColor(cv2.merge((Yf, u, v)), cv2.COLOR_YUV2RGB)
    return fusion


def removeBlackBorder(gray):
    """
    Remove the redundant black border of the stitched image.
    Input:
        gray: 2-D numpy array (grayscale image to crop)
    Output:
        cropped image plus the crop bounds (left, right, top, bottom)
    """
    threshold = 40  # intensity threshold
    nrow = gray.shape[0]  # image height
    ncol = gray.shape[1]  # image width
    # Sample the middle column and the middle row; this cannot distinguish
    # the case where the black area covers more than half of the image
    rowc = gray[:, ncol // 2]
    colc = gray[nrow // 2, :]
    rowflag = np.argwhere(rowc > threshold)
    colflag = np.argwhere(colc > threshold)
    left, bottom, right, top = rowflag[0, 0], colflag[-1, 0], rowflag[-1, 0], colflag[0, 0]
    # cv2.imshow('name', gray[left:right, top:bottom])  # preview
    return gray[left:right, top:bottom], left, right, top, bottom


def main(matchimg_vi, matchimg_in):
    """
    :param matchimg_vi: visible image
    :param matchimg_in: infrared image (grayscale)
    :return: (flag, fused image with detections, points, EN, SF, MI, SSIM)
    """
    try:
        orimg_vi = matchimg_vi
        orimg_in = matchimg_in
        h, w = orimg_vi.shape[:2]
        # Obtain the registration homography between the two images
        flag, H, dot = Images_matching(matchimg_vi, matchimg_in)
        if flag == 0:
            return 0, None, 0, 0.0, 0.0, 0.0, 0.0
        else:
            # Registration: warp the infrared image onto the visible image
            matched_ni = cv2.warpPerspective(orimg_in, H, (w, h))
            matched_ni, left, right, top, bottom = removeBlackBorder(matched_ni)
            # Fusion with the cropped visible image:
            # fusion = fusions(orimg_vi[left:right, top:bottom], matched_ni)
            # Fusion with the uncropped visible image:
            fusion = fusions(orimg_vi, matched_ni)

            # Convert to grayscale for the quality metrics
            fusion_gray = cv2.cvtColor(fusion, cv2.COLOR_RGB2GRAY)
            cropped_vi_gray = cv2.cvtColor(orimg_vi, cv2.COLOR_BGR2GRAY)
            matched_ni_gray = matched_ni  # the infrared image is already grayscale

            # Fusion quality metrics
            en = calculate_en(fusion_gray)
            sf = calculate_sf(fusion_gray)
            mi_visible = calculate_mi(fusion_gray, cropped_vi_gray)
            mi_infrared = calculate_mi(fusion_gray, matched_ni_gray)
            mi_total = mi_visible + mi_infrared

            # SSIM with a fallback when the computation fails
            try:
                ssim_visible = calculate_ssim(fusion_gray, cropped_vi_gray)
                ssim_infrared = calculate_ssim(fusion_gray, matched_ni_gray)
                ssim_avg = (ssim_visible + ssim_infrared) / 2
            except Exception as ssim_error:
                print(f"SSIM computation error: {ssim_error}")
                ssim_avg = -1  # -1 marks a failed computation

            # YOLOv8 object detection on the fused image
            results = yolo_model(fusion)
            annotated_image = results[0].plot()  # draw the detection boxes
            # Return the image with the detection results drawn on it
            return 1, annotated_image, dot, en, sf, mi_total, ssim_avg
    except Exception as e:
        print(f"Error in fusion/detection: {e}")
        return 0, None, 0, 0.0, 0.0, 0.0, 0.0


def parse_args():
    """Parse command-line arguments."""
    # Default visible and infrared image paths
    visible_image_path = "./test/visible/visibleI0195.jpg"
    infrared_image_path = "./test/infrared/infraredI0195.jpg"
    # Default visible and infrared video paths
    visible_video_path = "./test/visible.mp4"
    infrared_video_path = "./test/infrared.mp4"

    parser = argparse.ArgumentParser(description='Image fusion and object detection')
    parser.add_argument('--mode', type=str, choices=['video', 'image'], default='image',
                        help='input mode: video (video stream) or image (static image)')
    # Distinguish between camera input and a video file
    parser.add_argument('--source', type=str, choices=['camera', 'file'],
                        help='video input type: camera or file (video file)')
    # Video-mode arguments
    parser.add_argument('--video1', type=str, default=visible_video_path,
                        help='visible video path (only needed when source=file)')
    parser.add_argument('--video2', type=str, default=infrared_video_path,
                        help='infrared video path (only needed when source=file)')
    # Camera-mode arguments
    parser.add_argument('--camera_id1', type=int, default=0,
                        help='visible camera ID (only when source=camera, default 0)')
    parser.add_argument('--camera_id2', type=int, default=1,
                        help='infrared camera ID (only when source=camera, default 1)')
    parser.add_argument('--output', type=str, default='output.mp4',
                        help='output video path (only in video mode)')
    # Image-mode arguments
    parser.add_argument('--visible', type=str, default=visible_image_path,
                        help='visible image path (only in image mode)')
    parser.add_argument('--infrared', type=str, default=infrared_image_path,
                        help='infrared image path (only in image mode)')
    return parser.parse_args()
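
# Example invocations (the default paths above stand in for your own data):
#   python Image_Registration_test.py --mode image --visible ./test/visible/visibleI0195.jpg --infrared ./test/infrared/infraredI0195.jpg
#   python Image_Registration_test.py --mode video --source file --video1 ./test/visible.mp4 --video2 ./test/infrared.mp4
#   python Image_Registration_test.py --mode video --source camera --camera_id1 0 --camera_id2 1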


if __name__ == '__main__':
    time_all = 0
    dots = 0
    i = 0
    args = parse_args()
    if args.mode == 'video':
        if args.source == 'file':
            # ========== Video-stream mode ==========
            if not args.video1 or not args.video2:
                raise ValueError("Video mode requires --video1 and --video2")
            capture = cv2.VideoCapture(args.video2)
            capture2 = cv2.VideoCapture(args.video1)
        elif args.source == 'camera':
            # ========== Camera mode ==========
            capture = cv2.VideoCapture(args.camera_id1)
            capture2 = cv2.VideoCapture(args.camera_id2)
        else:
            raise ValueError("--source must be given: camera or file")

        # Shared video-processing logic
        fps = capture.get(cv2.CAP_PROP_FPS) if args.source == 'file' else 30
        fourcc = cv2.VideoWriter_fourcc(*'XVID')
        out = cv2.VideoWriter(args.output, fourcc, fps, (640, 480))
        while True:
            ret1, frame_vi = capture.read()   # visible frame
            ret2, frame_ir = capture2.read()  # infrared frame
            if not ret1 or not ret2:
                break
            # Convert the infrared frame to grayscale
            frame_ir_gray = cv2.cvtColor(frame_ir, cv2.COLOR_BGR2GRAY)
            # Run fusion and detection; main returns 7 values, of which only
            # the flag and the fused frame are used here
            flag, fusion, *_ = main(frame_vi, frame_ir_gray)
            if flag == 1:
                cv2.imshow("Fusion with YOLOv8 Detection", fusion)
                out.write(fusion)
            if cv2.waitKey(1) == ord('q'):
                break
        # Release the resources
        capture.release()
        capture2.release()
        out.release()
        cv2.destroyAllWindows()
    elif args.mode == 'image':
        # ========== Image mode ==========
        if not args.infrared or not args.visible:
            raise ValueError("Image mode requires --visible and --infrared")
        # Load the images
        img_visible = cv2.imread(args.visible)
        img_infrared = cv2.imread(args.infrared)
        if img_visible is None or img_infrared is None:
            print("Error: failed to load the images, please check the paths!")
            exit()
        # Convert the infrared image to grayscale
        img_inf_gray = cv2.cvtColor(img_infrared, cv2.COLOR_BGR2GRAY)
        # Run fusion and detection
        flag, fusion_result, dot, en, sf, mi, ssim_val = main(img_visible, img_inf_gray)
        if flag == 1:
            # Print the fusion quality metrics
            print("\n======== Fusion quality evaluation ========")
            print(f"Information entropy (EN): {en:.2f}")
            print(f"Spatial frequency (SF): {sf:.2f}")
            print(f"Mutual information (MI): {mi:.2f}")
            # Only show SSIM when it was computed successfully
            if ssim_val >= 0:
                print(f"Structural similarity (SSIM): {ssim_val:.4f}")
            else:
                print("Structural similarity (SSIM): computation failed (skipped)")
            print(f"Registration points: {dot}")
            # Save (and optionally display) the result
            # cv2.imshow("Fusion with Detection", fusion_result)
            cv2.imwrite("output/fusion_result.jpg", fusion_result)
            # cv2.waitKey(0)
            # cv2.destroyAllWindows()
        else:
            print("Fusion failed!")

View File

@@ -0,0 +1,147 @@
# -*- coding: utf-8 -*-
# @Time :
# @Author :
import cv2
import numpy as np

sift = cv2.SIFT_create()


def compuerSift2GetPts(img1, img2):
    # SIFT: detect keypoints and compute their descriptors
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher()
    raw_matches = matcher.knnMatch(des1, des2, k=2)
    good_matches = []
    ratio = 0.75
    for m1, m2 in raw_matches:
        # Lowe's ratio test: keep the nearest match only when it is clearly
        # better than the second-nearest one
        if m1.distance < ratio * m2.distance:
            good_matches.append([m1])
    matches = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None, flags=2)
    ptsA = np.float32([kp1[m[0].queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    ptsB = np.float32([kp2[m[0].trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    ransacReprojThreshold = 4
    # The homography aligns one image with the other via rotation,
    # translation, and so on
    if len(ptsA) == 0:
        return ptsA, ptsB, 0
    H, status = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, ransacReprojThreshold)
    cv2.imshow("matcher", matches)
    cv2.waitKey(100)
    return ptsA, ptsB, 1


def findBestDistanceAndPts(ptsA, ptsB):
    # Vote for the most frequent integer x and y offsets between matched points
    x_dct = {}
    y_dct = {}
    best_x = int(ptsA[0][0][0] - ptsB[0][0][0])
    best_y = int(ptsA[0][0][1] - ptsB[0][0][1])
    x_cnt, y_cnt = 0, 0
    for i in range(len(ptsA)):
        x_dis = int(ptsA[i][0][0] - ptsB[i][0][0])
        y_dis = int(ptsA[i][0][1] - ptsB[i][0][1])
        if x_dis in x_dct:
            x_dct[x_dis] += 1
            if x_dct[x_dis] > x_cnt:
                best_x = x_dis
                x_cnt = x_dct[x_dis]
        else:
            x_dct[x_dis] = 1
        if y_dis in y_dct:
            y_dct[y_dis] += 1
            if y_dct[y_dis] > y_cnt:
                best_y = y_dis
                y_cnt = y_dct[y_dis]
        else:
            y_dct[y_dis] = 1
    print(best_x, best_y)
    # Collect the points whose x offset matches the winning offset
    pt = []
    for i in range(len(ptsA)):
        x_dis = int(ptsA[i][0][0] - ptsB[i][0][0])
        if abs(best_x - x_dis) <= 0:
            pt.append([ptsA[i][0][0], ptsA[i][0][1]])
    return pt, best_x, best_y


def minDistanceHasXy(ptsA, ptsB):
    # Vote for the most frequent combined (x, y) offset
    dct = {}
    cnt = 0
    best = 's'
    for i in range(len(ptsA)):
        disx = int(ptsA[i][0][0] - ptsB[i][0][0] + 0.5)
        disy = int(ptsA[i][0][1] - ptsB[i][0][1] + 0.5)
        s = str(disx) + ',' + str(disy)
        if s in dct:
            dct[s] += 1
            if dct[s] >= cnt:
                cnt = dct[s]
                best = s
                print(s)
        else:
            dct[s] = 1
    for i, j in dct.items():
        print(i, j)
    print(best)


def detectImg(img1, img2, pta, best_x, best_y):
    # Crop both images to the bounding box of the consistent matches, shifting
    # the second image by the winning offset
    min_x = int(min(x[0] for x in pta))
    max_x = int(max(x[0] for x in pta))
    min_y = int(min(x[1] for x in pta))
    max_y = int(max(x[1] for x in pta))
    newimg1 = img1[min_y: max_y, min_x: max_x]
    newimg2 = img2[min_y - best_y: max_y - best_y, min_x - best_x: max_x - best_x]
    # cv2.imshow("newimg1", newimg1)
    # cv2.imshow("newimg2", newimg2)
    # cv2.waitKey(0)
    return newimg1, newimg2


if __name__ == '__main__':
    j = 0
    for i in range(20, 4771, 1):
        print(i)
        path1 = './data/907dat/gray/camera1-' + str(i) + '.png'
        path2 = './data/907dat/color/camera0-' + str(i) + '.png'
        img1 = cv2.imread(path1)
        img2 = cv2.imread(path2)
        if img1 is None or img2 is None:
            continue
        PtsA, PtsB, f = compuerSift2GetPts(img1, img2)
        if f == 0:
            continue
        pt, best_x, best_y = findBestDistanceAndPts(PtsA, PtsB)
        newimg1, newimg2 = detectImg(img1, img2, pt, best_x, best_y)
        if newimg1.shape[0] < 10 or newimg1.shape[1] < 10:
            continue
        print(newimg1.shape, newimg2.shape)
        # newimg1 = cv2.resize(newimg1, (320, 240))
        # newimg2 = cv2.resize(newimg2, (320, 240))
        writePath1 = './result/dat_result_2/gray/camera1-' + str(j) + '.png'
        writePath2 = './result/dat_result_2/color/camera0-' + str(j) + '.png'
        if newimg1.shape[0] > 255 and newimg1.shape[1] > 255 and newimg1.shape == newimg2.shape:
            # cv2.imwrite(writePath1, newimg1)
            # cv2.imwrite(writePath2, newimg2)
            j += 1
            cv2.imshow("newimg1", newimg1)
            cv2.imshow("newimg2", newimg2)
            cv2.waitKey()
    print(j)

image_fusion/__init__.py (new empty file)

BIN (new binary file, not shown; 152 KiB)

BIN (new binary file, not shown; 28 KiB)

BIN (new binary file, not shown; 67 KiB)

requirements.txt (new file, 41 lines)

@@ -0,0 +1,41 @@
certifi==2025.1.31
charset-normalizer==3.4.1
colorama==0.4.6
contourpy==1.3.2
cycler==0.12.1
filelock==3.18.0
fonttools==4.57.0
fsspec==2025.3.2
idna==3.10
Jinja2==3.1.6
kiwisolver==1.4.8
MarkupSafe==3.0.2
matplotlib==3.10.1
mpmath==1.3.0
networkx==3.4.2
numpy==2.1.1
opencv-python==4.11.0.86
packaging==24.2
pandas==2.2.3
pillow==11.2.1
psutil==7.0.0
py-cpuinfo==9.0.0
pyparsing==3.2.3
python-dateutil==2.9.0.post0
pytz==2025.2
PyYAML==6.0.2
requests==2.32.3
scipy==1.15.2
seaborn==0.13.2
setuptools==78.1.0
six==1.17.0
sympy==1.13.1
torch==2.6.0+cu124
torchaudio==2.6.0+cu124
torchvision==0.21.0+cu124
tqdm==4.67.1
typing_extensions==4.13.2
tzdata==2025.2
ultralytics==8.3.111
ultralytics-thop==2.0.14
urllib3==2.4.0

yolov8/yolov8.yaml (new file, 6 lines)

@@ -0,0 +1,6 @@
train: /mnt/DATA/dataset/uav_dataset/train/images/
val: /mnt/DATA/dataset/uav_dataset/val/images/
test: /mnt/DATA/dataset/test2/images/
# number of classes
nc: 1
names: ['uav']

yolov8/yolov8_train.py (new file, 13 lines)

@@ -0,0 +1,13 @@
from ultralytics import YOLO

# Load the pretrained model
model = YOLO('../yolov8n.pt')

# Start training
model.train(
    data='./yolov8.yaml',  # path to the dataset config file
    epochs=320,            # number of training epochs
    batch=-1,              # automatic batch size
    imgsz=640,             # input image size
    device=0               # device: 0 selects the GPU, 'cpu' the CPU
)
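
# After training, Ultralytics saves checkpoints under runs/detect/<name>/weights/
# (best.pt and last.pt by default); the best one can then be evaluated with,
# for example, YOLO('runs/detect/train/weights/best.pt').val(data='./yolov8.yaml').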