(1) An attention-based dual-encoder (sequence and graph) model for molecular property prediction
Molecular property prediction is a key step in drug discovery: accurately predicting properties such as bioactivity, toxicity, and solubility can significantly accelerate the identification and optimization of lead compounds. Most existing methods rely on a single molecular representation, either the linear sequence form or the graph form, and therefore struggle to capture a molecule's structural features and chemical properties comprehensively. This work proposes an attention-based dual-encoder model that exploits both the sequence and the graph representation of a molecule to build a more complete feature description.

On the sequence side, SMILES strings are preprocessed with a frequent-contiguous-subsequence algorithm that decomposes long sequences into chemically meaningful substructure units or single atoms; this tokenization preserves functional-group information and structural patterns. A Transformer-based sequence encoder then uses self-attention to learn long-range dependencies between positions, capturing associations between atoms that are far apart in the string but chemically interacting, while multi-head attention lets the model view the sequence from several perspectives and enriches the learned representation.

On the graph side, the molecule is represented as a graph with atoms as nodes and chemical bonds as edges; node features include atom type, charge, and hybridization state, and edge features include bond type and conjugation. A graph attention network encodes the molecular graph, adaptively aggregating neighbor features and learning importance weights for key atoms and bonds.

A feature decoder fuses the sequence and graph features, using cross-attention to let the two representations exchange information, and outputs the final property prediction. Experiments on multiple benchmark datasets validate the model: it achieves the best performance on six datasets, demonstrating the value of multi-view encoding for prediction accuracy.
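The greedy longest-match tokenization step described above can be illustrated in isolation. This is a minimal sketch with a hand-picked toy vocabulary; a real vocabulary would be mined from frequent contiguous subsequences of the training corpus:

```python
# Toy illustration of greedy longest-match tokenization over a SMILES string.
# The vocabulary below is hand-picked for demonstration only.
def greedy_tokenize(smiles, vocab, max_len=5):
    tokens = []
    i = 0
    while i < len(smiles):
        # Try the longest vocabulary entry starting at position i first.
        for length in range(min(max_len, len(smiles) - i), 0, -1):
            piece = smiles[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:  # no match at any length: fall back to the single character
            tokens.append(smiles[i])
            i += 1
    return tokens

vocab = {"c1ccc", "cc1", "C(=O)", "O", "C", "N"}
print(greedy_tokenize("CC(=O)Oc1ccccc1", vocab))
# -> ['C', 'C(=O)', 'O', 'c1ccc', 'cc1']
```

The single-character fallback mirrors the decomposition "into chemically meaningful substructure units or single atoms" described above.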
(2) A graph neural network property prediction model fusing 3D spatial information with molecular fingerprints
A molecule's three-dimensional conformation strongly influences its biological activity; different conformations of the same molecule can exhibit markedly different pharmacological properties. Most existing property prediction models consider only the 2D topology and ignore the relative positions of atoms in 3D space, which limits further gains in prediction accuracy. This work proposes a graph neural network that fuses 3D spatial information with molecular fingerprints, jointly considering the molecular graph, the 3D structure, and fingerprint features to build a more complete molecular description.

For the 3D information, a force-field optimization algorithm generates a 3D conformation, from which pairwise Euclidean distances and spatial angles are computed to build a geometry-augmented molecular graph. A distance-encoding layer in the GNN maps interatomic distances to learnable feature vectors that participate in message passing alongside atom and bond features, and an additional attention mechanism assigns higher weights to spatially close atoms so the model can capture conformational features.

For the fingerprints, two complementary fingerprint types are combined, one emphasizing topological features and the other pharmacophore features, giving a more complete description of the molecule's chemistry; a deep neural network then learns high-level abstract features from them.

The final prediction fuses the structural features from the GNN with the fingerprint features from the deep network; this multi-path fusion strategy lets the model exploit molecular information from several sources. Experiments show the model performs strongly on both classification and regression tasks, confirming the importance of 3D spatial information and multi-view feature fusion for molecular property prediction.
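The distance-encoding idea — mapping a continuous interatomic distance to a discrete bin whose embedding is learned — can be sketched without any deep-learning machinery. The 10 Å cutoff and 50 bins here are illustrative choices, matching the defaults in the implementation below:

```python
# Bucket a continuous distance (in angstroms) into one of num_bins discrete
# bins; distances beyond max_distance are clamped into the last bin.
def bin_distance(d, max_distance=10.0, num_bins=50):
    idx = int(d / max_distance * num_bins)
    return max(0, min(idx, num_bins - 1))

print(bin_distance(1.54))   # typical C-C bond length -> bin 7
print(bin_distance(25.0))   # beyond the cutoff -> clamped to bin 49
```

Each bin index is then looked up in an embedding table, so the network learns its own representation of "how far apart" two atoms are.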
(3) Visualization of key molecular substructures and model interpretability analysis
Deep learning models are often treated as black boxes whose predictions are hard to explain. This is a serious limitation in drug discovery, where researchers need to understand which structural features of a molecule drive its activity in order to guide subsequent optimization and design. The implementation below therefore includes an attention visualizer that extracts attention weights from the trained models and normalizes per-atom attention into importance scores, making the substructures behind each prediction visible.
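A common way to turn raw attention scores into atom-level importance is simple min-max normalization into [0, 1]. A minimal sketch in plain Python (no framework assumed), equivalent to the tensor version used by the visualizer below:

```python
# Rescale per-atom attention scores to [0, 1] importance weights;
# eps guards against division by zero when all scores are equal.
def node_importance(scores, eps=1e-8):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo + eps) for s in scores]

print(node_importance([0.1, 0.5, 0.9]))  # ~[0.0, 0.5, 1.0]
```

The resulting weights can be used directly to color atoms in a 2D molecular depiction, highlighting which substructures the model attended to.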
```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
from collections import Counter


class FrequentSubsequenceTokenizer:
    """Builds a vocabulary of frequent contiguous SMILES subsequences and
    tokenizes by greedy longest match."""

    def __init__(self, min_freq=10, max_length=5):
        self.min_freq = min_freq
        self.max_length = max_length
        self.vocab = {}

    def fit(self, smiles_list):
        # Count every contiguous subsequence of up to max_length characters.
        subsequence_counts = Counter()
        for smiles in smiles_list:
            for length in range(1, min(self.max_length + 1, len(smiles) + 1)):
                for i in range(len(smiles) - length + 1):
                    subsequence_counts[smiles[i:i + length]] += 1
        # Enumerate AFTER filtering so vocabulary indices are contiguous
        # (the original enumerated before filtering, leaving index gaps).
        frequent = [s for s, c in subsequence_counts.items() if c >= self.min_freq]
        self.vocab = {subseq: idx + 4 for idx, subseq in enumerate(frequent)}
        self.vocab['<PAD>'] = 0
        self.vocab['<UNK>'] = 1
        self.vocab['<CLS>'] = 2
        self.vocab['<SEP>'] = 3

    def tokenize(self, smiles):
        tokens = [self.vocab['<CLS>']]
        i = 0
        while i < len(smiles):
            matched = False
            # Prefer the longest vocabulary entry starting at position i.
            for length in range(min(self.max_length, len(smiles) - i), 0, -1):
                subseq = smiles[i:i + length]
                if subseq in self.vocab:
                    tokens.append(self.vocab[subseq])
                    i += length
                    matched = True
                    break
            if not matched:
                tokens.append(self.vocab.get(smiles[i], self.vocab['<UNK>']))
                i += 1
        tokens.append(self.vocab['<SEP>'])
        return tokens


class TransformerEncoder(nn.Module):
    """SMILES sequence encoder: token embedding + learned positional
    encoding + stacked self-attention layers."""

    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=6, max_length=512):
        super(TransformerEncoder, self).__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.positional_encoding = nn.Parameter(torch.randn(1, max_length, d_model))
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.output_dim = d_model

    def forward(self, x, mask=None):
        x = self.embedding(x)
        x = x + self.positional_encoding[:, :x.size(1), :]
        return self.transformer(x, src_key_padding_mask=mask)


class GraphAttentionLayer(nn.Module):
    """Dense multi-head graph attention over an adjacency matrix."""

    def __init__(self, in_features, out_features, num_heads=4):
        super(GraphAttentionLayer, self).__init__()
        self.num_heads = num_heads
        self.out_features = out_features
        self.W = nn.Linear(in_features, out_features * num_heads, bias=False)
        self.a = nn.Linear(2 * out_features, 1, bias=False)
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, x, adj):
        B, N, _ = x.shape
        h = self.W(x).view(B, N, self.num_heads, self.out_features)
        # Pairwise concatenation of source and target node features.
        h_i = h.unsqueeze(2).repeat(1, 1, N, 1, 1)
        h_j = h.unsqueeze(1).repeat(1, N, 1, 1, 1)
        e = self.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1))
        # Mask non-edges before normalizing attention over neighbors.
        mask = adj.unsqueeze(-1).repeat(1, 1, 1, self.num_heads)
        e = e.masked_fill(mask == 0, float('-inf'))
        attention = F.softmax(e, dim=2)
        h_prime = torch.einsum('bijk,bjkl->bikl', attention, h)
        return h_prime.mean(dim=2)  # average over heads -> (B, N, out_features)


class AttentiveFP(nn.Module):
    """Graph encoder: stacked GAT layers followed by mean pooling. Edge
    features are embedded but, in this simplified version, only node
    features take part in message passing."""

    def __init__(self, node_features, edge_features, hidden_dim=128, num_layers=3):
        super(AttentiveFP, self).__init__()
        self.node_embed = nn.Linear(node_features, hidden_dim)
        self.edge_embed = nn.Linear(edge_features, hidden_dim)
        self.gat_layers = nn.ModuleList(
            [GraphAttentionLayer(hidden_dim, hidden_dim) for _ in range(num_layers)])
        self.output_dim = hidden_dim

    def forward(self, node_features, edge_features, adj):
        h = self.node_embed(node_features)
        for gat in self.gat_layers:
            h = F.relu(gat(h, adj))
        return h.mean(dim=1)


class ASGEModel(nn.Module):
    """Dual-encoder model: Transformer over SMILES tokens plus GAT over the
    molecular graph, fused by attention before the prediction head."""

    def __init__(self, vocab_size, node_features, edge_features, num_tasks):
        super(ASGEModel, self).__init__()
        self.sequence_encoder = TransformerEncoder(vocab_size)
        self.graph_encoder = AttentiveFP(node_features, edge_features)
        fusion_dim = self.sequence_encoder.output_dim + self.graph_encoder.output_dim
        self.cross_attention = nn.MultiheadAttention(embed_dim=fusion_dim, num_heads=8, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(fusion_dim, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_tasks))

    def forward(self, seq_input, node_features, edge_features, adj, seq_mask=None):
        seq_features = self.sequence_encoder(seq_input, seq_mask)
        seq_pooled = seq_features.mean(dim=1)
        graph_features = self.graph_encoder(node_features, edge_features, adj)
        combined = torch.cat([seq_pooled, graph_features], dim=1)
        # Attend over the fused vector (treated as a length-1 sequence); the
        # original code defined cross_attention but never called it.
        fused = combined.unsqueeze(1)
        fused, _ = self.cross_attention(fused, fused, fused)
        return self.decoder(fused.squeeze(1))


class DistanceEncoder(nn.Module):
    """Buckets continuous interatomic distances into bins and embeds them."""

    def __init__(self, max_distance=10.0, num_bins=50, embed_dim=64):
        super(DistanceEncoder, self).__init__()
        self.max_distance = max_distance
        self.num_bins = num_bins
        self.embedding = nn.Embedding(num_bins, embed_dim)

    def forward(self, distances):
        binned = torch.clamp((distances / self.max_distance * self.num_bins).long(),
                             0, self.num_bins - 1)
        return self.embedding(binned)


class SpatialGraphNetwork(nn.Module):
    """3D-aware GNN: message passing over the adjacency matrix with residual
    updates that also see embedded interatomic distances."""

    def __init__(self, node_features, hidden_dim=128, num_layers=4):
        super(SpatialGraphNetwork, self).__init__()
        self.node_embed = nn.Linear(node_features, hidden_dim)
        self.distance_encoder = DistanceEncoder()
        self.conv_layers = nn.ModuleList()
        for _ in range(num_layers):
            self.conv_layers.append(nn.Sequential(
                nn.Linear(hidden_dim + 64, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim)))
        self.output_dim = hidden_dim

    def forward(self, node_features, distances, adj):
        h = self.node_embed(node_features)
        dist_embed = self.distance_encoder(distances)  # (B, N, N, 64)
        for conv in self.conv_layers:
            neighbor_features = torch.bmm(adj.float(), h)
            combined = torch.cat([neighbor_features, dist_embed.mean(dim=2)], dim=-1)
            h = h + conv(combined)  # residual update
        return h.mean(dim=1)


class MolecularFingerprintNetwork(nn.Module):
    """MLP feature extractor over molecular fingerprint vectors."""

    def __init__(self, fingerprint_dim=2048, hidden_dim=256):
        super(MolecularFingerprintNetwork, self).__init__()
        self.network = nn.Sequential(
            nn.Linear(fingerprint_dim, 1024), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(512, hidden_dim))
        self.output_dim = hidden_dim

    def forward(self, fingerprints):
        return self.network(fingerprints)


class ThreeDFGNNModel(nn.Module):
    """Fuses 3D-aware graph features with fingerprint features."""

    def __init__(self, node_features, fingerprint_dim, num_tasks):
        super(ThreeDFGNNModel, self).__init__()
        self.spatial_gnn = SpatialGraphNetwork(node_features)
        self.fingerprint_net = MolecularFingerprintNetwork(fingerprint_dim)
        fusion_dim = self.spatial_gnn.output_dim + self.fingerprint_net.output_dim
        self.fusion = nn.Sequential(
            nn.Linear(fusion_dim, 256), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_tasks))

    def forward(self, node_features, distances, adj, fingerprints):
        spatial_features = self.spatial_gnn(node_features, distances, adj)
        fp_features = self.fingerprint_net(fingerprints)
        combined = torch.cat([spatial_features, fp_features], dim=1)
        return self.fusion(combined)


class AttentionVisualizer:
    """Collects attention weights via forward hooks and turns per-node
    attention into normalized importance scores for visualization."""

    def __init__(self, model):
        self.model = model
        self.attention_weights = {}

    def register_hooks(self):
        def get_attention(name):
            def hook(module, input, output):
                # Only custom layers that expose an `attention_weights`
                # attribute are captured here.
                if hasattr(module, 'attention_weights'):
                    self.attention_weights[name] = module.attention_weights
            return hook
        for name, module in self.model.named_modules():
            if 'attention' in name.lower():
                module.register_forward_hook(get_attention(name))

    def visualize_node_importance(self, node_attention):
        importance = node_attention.mean(dim=0)
        # Min-max normalize to [0, 1]; eps guards against all-equal scores.
        normalized = (importance - importance.min()) / (importance.max() - importance.min() + 1e-8)
        return normalized.detach().cpu().numpy()


def train_model(model, train_loader, val_loader, epochs, learning_rate,
                task_type='classification'):
    optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-5)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=10)
    criterion = nn.BCEWithLogitsLoss() if task_type == 'classification' else nn.MSELoss()
    for epoch in range(epochs):
        model.train()
        train_loss = 0
        for batch in train_loader:
            optimizer.zero_grad()
            outputs = model(*batch[:-1])
            loss = criterion(outputs, batch[-1])
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        model.eval()
        val_loss = 0
        with torch.no_grad():
            for batch in val_loader:
                outputs = model(*batch[:-1])
                loss = criterion(outputs, batch[-1])
                val_loss += loss.item()
        # Step the LR scheduler once per epoch on the validation loss.
        scheduler.step(val_loss)
    return model


if __name__ == "__main__":
    vocab_size = 1000
    node_features = 39
    edge_features = 10
    fingerprint_dim = 2048
    num_tasks = 1
    asge_model = ASGEModel(vocab_size, node_features, edge_features, num_tasks)
    threedf_gnn = ThreeDFGNNModel(node_features, fingerprint_dim, num_tasks)
    tokenizer = FrequentSubsequenceTokenizer()
```