VidFuncta

This repo contains the official PyTorch implementation of the paper VidFuncta: Towards Generalizable Neural Representations for Ultrasound Videos. The paper can be found here.

Data

- The EchoNet-Dynamic dataset can be downloaded here.

- The Breast Ultrasound Video dataset can be downloaded via this link.

- The BEDLUS dataset of lung ultrasound videos can be requested here.

A mini-example of how the data needs to be stored can be found in the folder data.
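A per-dataset layout along these lines is one plausible reading of the mini-example (the folder and file names below are purely illustrative; the data folder in the repo is the authoritative reference):

```text
data/
├── lung/
│   ├── train/
│   │   ├── video_001
│   │   └── ...
│   └── test/
├── cardiac/
└── breast/
```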

Training of the Meta-Model

Training configurations are stored in the folder configs/experiments.

  • To train our VidFuncta approach on the lung dataset, run python3 train.py --config ./configs/experiments/2d_imgs/lung.yaml.
  • To train the 3D approach, run python3 train.py --config ./configs/experiments/3d_imgs/lung_3d.yaml.
  • To train the spatial approach, run python3 train_spatial.py --config ./configs/experiments/2d_imgs/lung_spatial.yaml.

The trained models will be stored in the logs folder. Replace "lung" with "cardiac" or "breast" to train on the other datasets.
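To make the functa-style training idea above concrete, here is a minimal, self-contained sketch (illustrative only, not the repository's actual code): a shared coordinate network is meta-learned, while each frame only receives a small modulation vector fitted with a few inner gradient steps.

```python
# Illustrative functa-style inner loop; architecture, dimensions, and
# hyperparameters here are assumptions, not the repo's actual settings.
import torch
import torch.nn as nn

class ModulatedMLP(nn.Module):
    """Shared base network whose hidden units are shift-modulated per sample."""
    def __init__(self, mod_dim=32, hidden=64):
        super().__init__()
        self.inp = nn.Linear(2, hidden)             # (x, y) pixel coordinates
        self.mod_proj = nn.Linear(mod_dim, hidden)  # maps modulation to a hidden shift
        self.out = nn.Linear(hidden, 1)             # grayscale intensity

    def forward(self, coords, modulation):
        h = torch.sin(self.inp(coords) + self.mod_proj(modulation))
        return self.out(h)

def fit_modulation(model, coords, target, mod_dim=32, steps=3, lr=1e-2):
    """Inner loop: adapt only the modulation vector to one frame."""
    modulation = torch.zeros(mod_dim, requires_grad=True)
    for _ in range(steps):
        pred = model(coords, modulation)
        loss = ((pred - target) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, modulation)
        modulation = (modulation - lr * grad).detach().requires_grad_(True)
    return modulation

# Toy usage: fit one 8x8 "frame" of random intensities.
model = ModulatedMLP()
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 8), torch.linspace(-1, 1, 8), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = torch.rand(64, 1)
phi = fit_modulation(model, coords, target)
```

In the full method the outer loop would additionally update the shared base network across many videos; only the inner adaptation is sketched here.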

Inference and saving of the modulation vectors

  • To store the modulations and reconstruct the videos, run python3 reconstruct.py --config ./configs/reconstruct/lung_reconstruct.yaml.

In the YAML file, adapt the model path to point to the correct checkpoint in the logs folder. The output will be stored in a folder called reconstructions.
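Once the modulation vectors are stored, downstream tasks can operate on them directly instead of on the raw videos. A hedged sketch (the shapes, file handling, and classifier here are assumptions for illustration, not the repo's pipeline):

```python
# Illustrative downstream use of stored modulations; replace the random
# stand-in with e.g. a tensor loaded from the reconstructions folder.
import torch
import torch.nn as nn

T, D = 32, 64                       # assumed: T frames, modulation length D
phi = torch.randn(8, T, D)          # stand-in batch of 8 videos' modulation matrices
labels = torch.randint(0, 2, (8,))  # assumed binary video-level labels

# A tiny linear classifier on the flattened modulations.
clf = nn.Sequential(nn.Flatten(), nn.Linear(T * D, 2))
logits = clf(phi)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```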

Detailed Results

The first row shows the model input; the second row shows the reconstructed videos using a model trained on the mixed dataset. Below, we show visualizations of the modulation vectors for two samples of the cardiac dataset. The time dimension is shown on the y-axis, while the x-axis shows the length of the modulation vectors.

Modulation vectors $\phi$ for 2 samples of the cardiac dataset
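A visualization of this kind can be sketched as follows (assumed data: a (T, D) modulation matrix with T frames on the y-axis and modulation length D on the x-axis; the real matrices come from the reconstruction step above):

```python
# Illustrative plot of a modulation matrix; the random array is a stand-in
# for a saved (T=32, D=64) modulation matrix of one video.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, writes straight to file
import matplotlib.pyplot as plt

phi = np.random.randn(32, 64)

fig, ax = plt.subplots(figsize=(6, 3))
im = ax.imshow(phi, aspect="auto", cmap="viridis")
ax.set_xlabel("modulation vector length")
ax.set_ylabel("time (frames)")
fig.colorbar(im, ax=ax)
fig.savefig("modulations.png", dpi=150)
```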

Comparing Methods

PocovidNet

We adapted the code repository available here, using the video classification approach.

Res2+1D

The Res2+1D architecture was adapted from the torchvision video model implementation.

MedFuncta

This GitHub repository is based on MedFuncta, available here.

Spatial Functa

We followed the description in the paper Spatial Functa to extract modulation vectors of dimension 4x4x64.
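The 4x4x64 shape can be read as a small spatial grid of local modulation vectors rather than one flat global vector; a minimal sketch of that layout (illustrative, not the repo's code):

```python
# Spatial Functa-style modulation layout: a 4x4 grid of 64-dim local vectors.
import torch

spatial_phi = torch.randn(4, 4, 64)   # H' x W' x C modulation grid per frame
flat = spatial_phi.reshape(-1)        # equivalent flat length: 4 * 4 * 64 = 1024
```

Each grid cell modulates the network only for coordinates falling in its spatial region, which preserves locality compared to a single global modulation vector.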
