Models API¶
The modeling module provides neural network architectures and training functions for variational autoencoders (VAEs). All classes and functions documented here are defined in renalprog/modeling/train.py.
Overview¶
This module includes:
- VAE architectures (standard, conditional, simple)
- Training and evaluation functions
- Loss functions (reconstruction, KL divergence)
- Checkpoint management
- Post-processing networks
Model Architectures¶
VAE¶
Standard Variational Autoencoder with encoder-decoder architecture.
VAE ¶
Bases: Module
Variational Autoencoder (VAE).
Standard VAE implementation with encoder-decoder architecture and reparameterization trick for sampling from the latent space.
Args:
- input_dim: Dimension of input data (number of genes)
- mid_dim: Dimension of hidden layer
- features: Dimension of latent space
- output_layer: Output activation function (default: nn.ReLU)
Functions¶
forward ¶
Forward pass through VAE.
Args: x: Input data (batch_size, input_dim)
Returns: Tuple of (reconstruction, mu, log_var, z)
reparametrize ¶
Reparameterization trick: sample from N(mu, var) using N(0,1).
Args:
- mu: Mean of the latent distribution
- log_var: Log variance of the latent distribution
Returns: Sampled latent vector
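A minimal sketch of the standard reparameterization trick this method implements (the actual source may differ in details such as naming):

import torch

def reparametrize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * log_var)  # sigma = exp(log_var / 2)
    eps = torch.randn_like(std)     # eps ~ N(0, I)
    return mu + eps * std           # z ~ N(mu, sigma^2), differentiable w.r.t. mu and log_var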
Example Usage:
import torch
from renalprog.modeling.train import VAE
# Create VAE model
model = VAE(
    input_dim=20000,  # Number of genes
    mid_dim=1024,     # Hidden layer size
    features=128,     # Latent dimension
)
# Forward pass
x = torch.randn(32, 20000) # Batch of gene expression
reconstruction, mu, log_var, z = model(x)
CVAE¶
Conditional VAE that incorporates clinical covariates.
CVAE ¶
Bases: VAE
Conditional Variational Autoencoder.
VAE that conditions on additional information (e.g., clinical data).
Args:
- input_dim: Dimension of input data
- mid_dim: Dimension of hidden layer
- features: Dimension of latent space
- num_classes: Number of condition classes
- output_layer: Output activation function
Functions¶
forward ¶
forward(
    x: Tensor, condition: Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
Forward pass through CVAE.
Args:
- x: Input data
- condition: Conditioning information (one-hot encoded)
Returns: Tuple of (reconstruction, mu, log_var, z)
Example Usage:
import torch
from renalprog.modeling.train import CVAE
# Create conditional VAE
model = CVAE(
    input_dim=20000,
    mid_dim=1024,
    features=128,
    num_classes=2,  # e.g., disease stage encoded as 2 classes
)
# Forward pass with condition
x = torch.randn(32, 20000)
condition = torch.eye(2)[torch.randint(0, 2, (32,))]  # One-hot encoded condition
reconstruction, mu, log_var, z = model(x, condition)
AE¶
Simplified autoencoder without variational component.
AE ¶
Bases: Module
Standard Autoencoder (without variational inference).
Similar architecture to VAE but without reparameterization trick.
Args:
- input_dim: Dimension of input data
- mid_dim: Dimension of hidden layer
- features: Dimension of latent space
- output_layer: Output activation function
Functions¶
forward ¶
Forward pass through AE.
Args: x: Input data
Returns: Tuple of (reconstruction, None, None, z). The None values stand in for mu and log_var, keeping the interface consistent with VAE.
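Example Usage (a minimal sketch, assuming the AE constructor mirrors the documented VAE arguments):

import torch
from renalprog.modeling.train import AE

# Create a plain autoencoder
model = AE(
    input_dim=20000,  # Number of genes
    mid_dim=1024,     # Hidden layer size
    features=128,     # Latent dimension
)

# Forward pass; mu and log_var come back as None for the AE
x = torch.randn(32, 20000)
reconstruction, mu, log_var, z = model(x)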
Loss Functions¶
vae_loss¶
Complete VAE loss combining reconstruction and KL divergence.
vae_loss ¶
vae_loss(
    reconstruction: Tensor,
    x: Tensor,
    mu: Tensor,
    log_var: Tensor,
    beta: float = 1.0,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
Calculate VAE loss: reconstruction loss + KL divergence.
Args:
- reconstruction: Reconstructed output
- x: Original input
- mu: Mean of latent distribution
- log_var: Log variance of latent distribution
- beta: Weight for KL divergence term (beta-VAE)
Returns: Tuple of (total_loss, reconstruction_loss, kl_divergence)
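Example Usage (a sketch that reuses reconstruction, x, mu, and log_var from the VAE forward pass shown earlier):

from renalprog.modeling.train import vae_loss

total_loss, recon_loss, kl = vae_loss(
    reconstruction, x, mu, log_var,
    beta=1.0,  # weight of the KL term; beta != 1 gives a beta-VAE
)
total_loss.backward()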
reconstruction_loss¶
MSE-based reconstruction loss.
reconstruction_loss ¶
Calculate reconstruction loss (MSE).
Args:
- reconstruction: Reconstructed output
- x: Original input
- reduction: Reduction method ('sum' or 'mean')
Returns: Reconstruction loss
kl_divergence¶
KL divergence between latent distribution and prior.
kl_divergence ¶
Calculate KL divergence between approximate posterior and prior.
Args:
- mu: Mean of approximate posterior
- log_var: Log variance of approximate posterior
Returns: KL divergence
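For a diagonal Gaussian posterior N(mu, sigma^2) and a standard normal prior, the standard closed-form term is shown below; whether the source sums or averages over the batch is an assumption:

kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())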
Training Functions¶
train_vae¶
Main training function for VAE models.
train_vae ¶
train_vae(
    X_train: ndarray,
    X_test: ndarray,
    y_train: Optional[ndarray] = None,
    y_test: Optional[ndarray] = None,
    config: Optional[VAEConfig] = None,
    save_dir: Optional[Path] = None,
    resume_from: Optional[Path] = None,
    force_cpu: bool = False,
) -> Tuple[nn.Module, Dict[str, list]]
Train a VAE model with full checkpointing support.
Args:
- X_train: Training data (samples × features) - numpy array or pandas DataFrame
- X_test: Test data (samples × features) - numpy array or pandas DataFrame
- y_train: Optional training labels for CVAE
- y_test: Optional test labels for CVAE
- config: Training configuration
- save_dir: Directory to save checkpoints
- resume_from: Optional checkpoint path to resume training
- force_cpu: Force CPU usage even if CUDA is available (for compatibility)
Returns: Tuple of (trained_model, training_history)
Example Usage:
from renalprog.modeling.train import train_vae
from pathlib import Path
import pandas as pd
# Load training data
train_expr = pd.read_csv("data/interim/split/train_expression.tsv", sep="\t", index_col=0)
test_expr = pd.read_csv("data/interim/split/test_expression.tsv", sep="\t", index_col=0)
# Train VAE (pass config=VAEConfig(...) to customize epochs, batch size, learning rate, etc.)
model, history = train_vae(
    X_train=train_expr.values,
    X_test=test_expr.values,
    save_dir=Path("models/my_vae"),
)
print(f"Final validation loss: {history['val_loss'][-1]:.4f}")
train_epoch¶
Train the model for one epoch.
train_epoch ¶
train_epoch(
    model: Module,
    dataloader: DataLoader,
    optimizer: Optimizer,
    device: str,
    config: VAEConfig,
    beta: Optional[float] = None,
) -> Dict[str, float]
Train model for one epoch.
Args:
- model: VAE model
- dataloader: Training DataLoader
- optimizer: Optimizer
- device: Device to use
- config: Training configuration
- beta: Beta value for this epoch (if None, uses config.BETA)
Returns: Dictionary with loss metrics
evaluate_model¶
Evaluate model on validation/test data.
evaluate_model ¶
evaluate_model(
    model: Module,
    dataloader: DataLoader,
    device: str,
    config: VAEConfig,
    beta: Optional[float] = None,
) -> Dict[str, float]
Evaluate model on validation/test set.
Args:
- model: VAE model
- dataloader: Validation DataLoader
- device: Device to use
- config: Training configuration
- beta: Beta value for this epoch (if None, uses config.BETA)
Returns: Dictionary with loss metrics
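Example Usage (a sketch of a manual training loop; it assumes VAEConfig is importable from the same module, can be constructed with defaults, and that X_train / X_test are the arrays from the train_vae example):

import torch
from renalprog.modeling.train import (
    VAE, VAEConfig, create_dataloader, train_epoch, evaluate_model,
)

config = VAEConfig()  # assumed default-constructible
device = "cuda" if torch.cuda.is_available() else "cpu"
model = VAE(input_dim=20000, mid_dim=1024, features=128).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

train_loader = create_dataloader(X_train, batch_size=32, shuffle=True)
test_loader = create_dataloader(X_test, batch_size=32, shuffle=False)

for epoch in range(100):
    train_metrics = train_epoch(model, train_loader, optimizer, device, config)
    val_metrics = evaluate_model(model, test_loader, device, config)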
train_vae_with_postprocessing¶
Train VAE and post-processing network together.
train_vae_with_postprocessing ¶
train_vae_with_postprocessing(
    X_train: ndarray,
    X_test: ndarray,
    vae_config: Optional[VAEConfig] = None,
    reconstruction_network_dims: Optional[List[int]] = None,
    reconstruction_epochs: int = 200,
    reconstruction_lr: float = 0.0001,
    batch_size_reconstruction: int = 8,
    save_dir: Optional[Path] = None,
    force_cpu: bool = False,
) -> Tuple[nn.Module, nn.Module, Dict[str, list], Dict[str, list]]
Train VAE followed by postprocessing network (full pipeline).
This implements the complete training pipeline as in train_vae.sh:
1. Train VAE on gene expression data
2. Get VAE reconstructions
3. Train NetworkReconstruction to adjust VAE output
Args:
- X_train: Training data (numpy array or pandas DataFrame)
- X_test: Test data (numpy array or pandas DataFrame)
- vae_config: VAE configuration
- reconstruction_network_dims: Architecture for reconstruction network. If None, defaults to [input_dim, 4096, 1024, 4096, input_dim]
- reconstruction_epochs: Epochs for training reconstruction network
- reconstruction_lr: Learning rate for reconstruction network
- batch_size_reconstruction: Batch size for the reconstruction network
- save_dir: Directory to save models
- force_cpu: Force CPU usage
Returns: Tuple of (vae_model, reconstruction_network, vae_history, reconstruction_history)
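Example Usage (a sketch relying on the documented defaults; train_expr and test_expr are the DataFrames from the train_vae example):

from renalprog.modeling.train import train_vae_with_postprocessing
from pathlib import Path

vae, recon_net, vae_history, recon_history = train_vae_with_postprocessing(
    X_train=train_expr.values,
    X_test=test_expr.values,
    save_dir=Path("models/vae_pipeline"),
)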
Utility Functions¶
create_dataloader¶
Create PyTorch DataLoader from numpy arrays.
create_dataloader ¶
create_dataloader(
    X: ndarray,
    y: Optional[ndarray] = None,
    batch_size: int = 32,
    shuffle: bool = True,
) -> torch.utils.data.DataLoader
Create DataLoader with MinMax normalization.
Args:
- X: Input data (samples x features)
- y: Optional labels
- batch_size: Batch size
- shuffle: Whether to shuffle data
Returns: DataLoader
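Example Usage (the loader applies MinMax normalization internally; the exact batch structure is an assumption):

from renalprog.modeling.train import create_dataloader

loader = create_dataloader(X_train, batch_size=32, shuffle=True)
for batch in loader:
    pass  # each batch holds normalized expression tensors (plus labels when y is given)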
frange_cycle_linear¶
Generate cyclical annealing schedule for KL divergence.
frange_cycle_linear ¶
frange_cycle_linear(
    start: float,
    stop: float,
    n_epoch: int,
    n_cycle: int = 4,
    ratio: float = 0.5,
) -> np.ndarray
Generate a linear cyclical schedule for beta hyperparameter.
This creates a cyclical annealing schedule where beta increases linearly from start to stop over a portion of each cycle (controlled by ratio), then stays constant at stop for the remainder of the cycle.
Args:
- start: Initial value of beta (typically 0.0)
- stop: Final/maximum value of beta (typically 1.0)
- n_epoch: Total number of epochs
- n_cycle: Number of cycles (default: 4)
- ratio: Ratio of cycle spent increasing beta (default: 0.5)
  - 0.5 means half cycle increasing, half constant
  - 1.0 means entire cycle increasing
Returns: Array of beta values for each epoch
Example:
>>> # 3 cycles over 300 epochs, beta increases from 0 to 1 over the first half of each cycle
>>> beta_schedule = frange_cycle_linear(0.0, 1.0, 300, n_cycle=3, ratio=0.5)
>>> # Epochs 0-50: beta increases 0.0 -> 1.0
>>> # Epochs 50-100: beta stays at 1.0
>>> # Epochs 100-150: beta increases 0.0 -> 1.0
>>> # Epochs 150-200: beta stays at 1.0
>>> # Epochs 200-250: beta increases 0.0 -> 1.0
>>> # Epochs 250-300: beta stays at 1.0
Example Usage:
from renalprog.modeling.train import frange_cycle_linear
# Create annealing schedule
schedule = frange_cycle_linear(
    start=0.0,
    stop=1.0,
    n_epoch=1000,
    n_cycle=4,
    ratio=0.5,
)
# Use in training loop
for epoch, beta in enumerate(schedule):
    loss = recon_loss + beta * kl_loss  # anneal the KL weight over training
Post-Processing Network¶
NetworkReconstruction¶
Neural network for refining VAE reconstructions.
NetworkReconstruction ¶
Bases: Module
Deep neural network to adjust VAE reconstruction.
This network is trained on top of VAE output to improve reconstruction quality by learning a mapping from VAE reconstruction to original data.
Args: layer_dims: List of layer dimensions [input_dim, hidden1, hidden2, ..., output_dim]
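Example Usage (a sketch using the default architecture that train_vae_with_postprocessing falls back to):

from renalprog.modeling.train import NetworkReconstruction

input_dim = 20000  # number of genes
network = NetworkReconstruction(
    layer_dims=[input_dim, 4096, 1024, 4096, input_dim]
)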
train_reconstruction_network¶
Train post-processing network.
train_reconstruction_network ¶
train_reconstruction_network(
    network: Module,
    vae_reconstructions: DataFrame,
    original_data: DataFrame,
    train_indices: List,
    test_indices: List,
    epochs: int = 200,
    lr: float = 0.0001,
    batch_size: int = 32,
    device: str = "cpu",
) -> Tuple[nn.Module, List[float], List[float]]
Train reconstruction network to adjust VAE output.
Args:
- network: NetworkReconstruction model
- vae_reconstructions: DataFrame with VAE reconstructions (samples x genes)
- original_data: DataFrame with original gene expression (samples x genes)
- train_indices: List of training sample indices
- test_indices: List of test sample indices
- epochs: Number of training epochs
- lr: Learning rate
- batch_size: Batch size
- device: Device to use
Returns: Tuple of (trained_network, train_losses, test_losses)
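Example Usage (a sketch; vae_recon_df and original_df are assumed DataFrames aligned on sample IDs, and train_idx / test_idx come from the train/test split):

from renalprog.modeling.train import train_reconstruction_network

network, train_losses, test_losses = train_reconstruction_network(
    network=network,
    vae_reconstructions=vae_recon_df,
    original_data=original_df,
    train_indices=train_idx,
    test_indices=test_idx,
    epochs=200,
    lr=1e-4,
    batch_size=32,
    device="cpu",
)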
See Also¶
- Training API - Complete training pipeline
- Prediction API - Using trained models
- Configuration - Model hyperparameters