機(jī)關(guān)網(wǎng)站建設(shè)管理工作自查報(bào)告廈門總?cè)O(shè)計(jì)裝飾工程有限公司
鶴壁市浩天電氣有限公司
2026/01/24 07:07:07
機(jī)關(guān)網(wǎng)站建設(shè)管理工作自查報(bào)告,廈門總?cè)O(shè)計(jì)裝飾工程有限公司,淘寶優(yōu)惠券微網(wǎng)站開發(fā),開發(fā)公司起名# DAY 43 隨機(jī)函數(shù)與廣播機(jī)制
知識(shí)點(diǎn)回顧:
1. 隨機(jī)張量的生成: torch.randn函數(shù)
2. 卷積和池化的計(jì)算公式 (可以不掌握, 會(huì)自動(dòng)計(jì)算的)
3. pytorch的廣播機(jī)制: 加法和乘法的廣播機(jī)制
ps: numpy運(yùn)算也有類似的廣播機(jī)制, 基本一致
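
A minimal sketch of these points (my own addition, not from the original notes; the tensor shapes and layer sizes are arbitrary illustrative choices, assuming a standard PyTorch installation):

```python
import torch

# 1. Random tensors: torch.randn samples from a standard normal distribution
x = torch.randn(3, 4)               # a 3x4 matrix
imgs = torch.randn(16, 3, 32, 32)   # a fake batch of 16 RGB images, 32x32 each

# 2. Convolution / pooling output size (the framework works this out for you):
#    out = floor((in + 2*padding - kernel_size) / stride) + 1
conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1)
print(conv(imgs).shape)             # torch.Size([16, 8, 32, 32]): (32 + 2 - 3)/1 + 1 = 32
pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(conv(imgs)).shape)       # torch.Size([16, 8, 16, 16]): (32 - 2)/2 + 1 = 16

# 3. Broadcasting in addition and multiplication: shapes are aligned from the
#    trailing dimensions, and size-1 dimensions are stretched to match
a = torch.randn(4, 3)
b = torch.randn(3)                  # treated as (1, 3), stretched to (4, 3)
print((a + b).shape)                # torch.Size([4, 3])
print((a * b).shape)                # torch.Size([4, 3])
col = torch.randn(4, 1)
row = torch.randn(1, 3)
print((col + row).shape)            # torch.Size([4, 3]): both operands are expanded
```

NumPy's broadcasting follows the same alignment rule, so the last example behaves identically with np.random.randn.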
作業(yè): 自己多舉幾個(gè)例子幫助自己理解即可…# DAY 43 隨機(jī)函數(shù)與廣播機(jī)制知識(shí)點(diǎn)回顧:1. 隨機(jī)張量的生成: torch.randn函數(shù)2. 卷積和池化的計(jì)算公式 (可以不掌握, 會(huì)自動(dòng)計(jì)算的)3. pytorch的廣播機(jī)制: 加法和乘法的廣播機(jī)制ps: numpy運(yùn)算也有類似的廣播機(jī)制, 基本一致作業(yè): 自己多舉幾個(gè)例子幫助自己理解即可# CBAM 注意力知識(shí)點(diǎn)回顧:1. 通道注意力模塊復(fù)習(xí)2. 空間注意力模塊3. CBAM的定義作業(yè)嘗試對今天的模型檢查參數(shù)數(shù)目并用 tensorboard 查看訓(xùn)練過程浙大疏錦行# 導(dǎo)入依賴庫 import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt # 1. 模型參數(shù)統(tǒng)計(jì)工具統(tǒng)計(jì)模型總參數(shù)、可訓(xùn)練/不可訓(xùn)練參數(shù)數(shù)量 def count_model_parameters(model): total_params 0 trainable_params 0 non_trainable_params 0 for param in model.parameters(): param_num param.numel() total_params param_num if param.requires_grad: trainable_params param_num else: non_trainable_params param_num def format_params(num): if num 1e6: return f{num/1e6:.2f}M elif num 1e3: return f{num/1e3:.2f}K else: return f{num} print(模型參數(shù)統(tǒng)計(jì)) print(f總參數(shù): {format_params(total_params)} ({total_params:,})) print(f可訓(xùn)練參數(shù): {format_params(trainable_params)} ({trainable_params:,})) print(f不可訓(xùn)練參數(shù): {format_params(non_trainable_params)} ({non_trainable_params:,})) return total_params, trainable_params, non_trainable_params # 2. CBAM注意力模塊定義 class CBAMBlock(nn.Module): def __init__(self, channels, reduction16): super().__init__() # 通道注意力 self.avg_pool nn.AdaptiveAvgPool2d(1) self.max_pool nn.AdaptiveMaxPool2d(1) self.fc nn.Sequential( nn.Linear(channels, channels//reduction), nn.ReLU(inplaceTrue), nn.Linear(channels//reduction, channels) ) # 空間注意力 self.spatial nn.Sequential( nn.Conv2d(2, 1, kernel_size7, padding3, biasFalse), nn.Sigmoid() ) self.sigmoid nn.Sigmoid() def forward(self, x): # 通道注意力計(jì)算 avg_out self.fc(self.avg_pool(x).view(x.size(0), -1)).view(x.size(0), x.size(1), 1, 1) max_out self.fc(self.max_pool(x).view(x.size(0), -1)).view(x.size(0), x.size(1), 1, 1) channel_att self.sigmoid(avg_out max_out) x x * channel_att # 空間注意力計(jì)算 avg_out torch.mean(x, dim1, keepdimTrue) max_out, _ torch.max(x, dim1, keepdimTrue) spatial_att self.spatial(torch.cat([avg_out, max_out], dim1)) x x * spatial_att return x # 3. 基于CBAM的CNN模型定義 class CBAM_CNN(nn.Module): def __init__(self, num_classes10): super().__init__() self.features nn.Sequential( nn.Conv2d(3, 64, kernel_size3, padding1), nn.ReLU(inplaceTrue), CBAMBlock(64), nn.MaxPool2d(2, 2), nn.Conv2d(64, 128, kernel_size3, padding1), nn.ReLU(inplaceTrue), CBAMBlock(128), nn.MaxPool2d(2, 2), ) self.classifier nn.Linear(128 * 8 * 8, num_classes) def forward(self, x): x self.features(x) x x.view(x.size(0), -1) x self.classifier(x) return x # 4. 
訓(xùn)練函數(shù)集成TensorBoard可視化 def train(model, train_loader, test_loader, criterion, optimizer, scheduler, device, epochs, writer): model.train() all_iter_losses [] iter_indices [] for epoch in range(epochs): running_loss 0.0 correct 0 total 0 for batch_idx, (data, target) in enumerate(train_loader): data, target data.to(device), target.to(device) optimizer.zero_grad() output model(data) loss criterion(output, target) loss.backward() optimizer.step() iter_loss loss.item() global_step epoch * len(train_loader) batch_idx 1 all_iter_losses.append(iter_loss) iter_indices.append(global_step) # TensorBoard記錄batch級損失 writer.add_scalar(Train/Batch_Loss, iter_loss, global_step) running_loss iter_loss _, predicted output.max(1) total target.size(0) correct predicted.eq(target).sum().item() if (batch_idx 1) % 100 0: print(fEpoch {epoch1}/{epochs} | Batch {batch_idx1}/{len(train_loader)} | 單Batch損失: {iter_loss:.4f}) # 記錄epoch級訓(xùn)練指標(biāo) epoch_train_loss running_loss / len(train_loader) epoch_train_acc 100. * correct / total writer.add_scalar(Train/Epoch_Loss, epoch_train_loss, epoch1) writer.add_scalar(Train/Epoch_Accuracy, epoch_train_acc, epoch1) # 測試階段 model.eval() test_loss 0 correct_test 0 total_test 0 with torch.no_grad(): for data, target in test_loader: data, target data.to(device), target.to(device) output model(data) test_loss criterion(output, target).item() _, predicted output.max(1) total_test target.size(0) correct_test predicted.eq(target).sum().item() # 記錄epoch級測試指標(biāo) epoch_test_loss test_loss / len(test_loader) epoch_test_acc 100. * correct_test / total_test writer.add_scalar(Test/Epoch_Loss, epoch_test_loss, epoch1) writer.add_scalar(Test/Epoch_Accuracy, epoch_test_acc, epoch1) scheduler.step(epoch_test_loss) print(fEpoch {epoch1} 完成 | 訓(xùn)練準(zhǔn)確率: {epoch_train_acc:.2f}% | 測試準(zhǔn)確率: {epoch_test_acc:.2f}%) writer.close() return epoch_test_acc # 5. 繪圖輔助函數(shù) def plot_iter_losses(losses, indices): plt.figure(figsize(10, 4)) plt.plot(indices, losses, b-, alpha0.7) plt.xlabel(Iteration) plt.ylabel(Loss) plt.title(Training Loss per Iteration) plt.grid(True) plt.show() # 6. 主執(zhí)行邏輯 if __name__ __main__: # 設(shè)備配置 device torch.device(cuda if torch.cuda.is_available() else cpu) # 數(shù)據(jù)加載CIFAR10 transform transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) train_dataset torchvision.datasets.CIFAR10(root./data, trainTrue, downloadTrue, transformtransform) test_dataset torchvision.datasets.CIFAR10(root./data, trainFalse, downloadTrue, transformtransform) train_loader DataLoader(train_dataset, batch_size64, shuffleTrue, num_workers2) test_loader DataLoader(test_dataset, batch_size64, shuffleFalse, num_workers2) # 模型、損失函數(shù)、優(yōu)化器初始化 model CBAM_CNN().to(device) criterion nn.CrossEntropyLoss() optimizer optim.Adam(model.parameters(), lr0.001) scheduler optim.lr_scheduler.ReduceLROnPlateau(optimizer, modemin, patience3, factor0.5) # 統(tǒng)計(jì)模型參數(shù) count_model_parameters(model) # 初始化TensorBoard writer SummaryWriter(log_dir./cbam_logs) # 啟動(dòng)訓(xùn)練 epochs 50 print(開始模型訓(xùn)練...) final_accuracy train(model, train_loader, test_loader, criterion, optimizer, scheduler, device, epochs, writer) print(f訓(xùn)練完成 | 最終測試準(zhǔn)確率: {final_accuracy:.2f}%)