# torchkit

`torchkit` is a lightweight library of PyTorch utilities useful for day-to-day research. Its main goal is to abstract away the redundant boilerplate associated with research projects, such as experiment configuration, logging, and model checkpointing. It consists of:
| Module | Description |
|--------|-------------|
| `torchkit.Logger` | A wrapper around TensorBoard's `SummaryWriter` for safe logging of losses, learning rates, and metrics. |
| `torchkit.checkpoint` | A port of TensorFlow's checkpoint management tools, containing PyTorch equivalents of `tf.train.Checkpoint` and `tf.train.CheckpointManager`. |
| `torchkit.layers` | Commonly used layers from research papers that are unavailable in vanilla PyTorch, such as "same" and "causal" convolutions and `SpatialSoftArgmax` (a plain-PyTorch sketch of "same" convolution follows the table). |
| `torchkit.featurizers` | Convenience wrappers around PyTorch ResNets and InceptionNets, giving you fine-grained control over which layer to use as output, how to fine-tune them, and how to update batch norm statistics. Featurizers can be easily extended to any model of choice. |
| `torchkit.losses` | Useful loss functions unavailable in vanilla PyTorch, such as cross entropy with label smoothing (also sketched below) and Huber loss. |
| `torchkit.utils` | Helper methods for file/folder manipulation, video frame and audio processing, and other PyTorch-related tasks. |
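To make the `torchkit.layers` entry concrete, here is a minimal plain-PyTorch sketch of what a "same" convolution does. It illustrates the idea only and is not torchkit's actual implementation; see `torchkit.layers` for that.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SameConv2d(nn.Module):
    """Illustrative "same" convolution: output H/W match input H/W (stride 1).

    A sketch of the concept, not torchkit's implementation.
    """

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # A stride-1 conv shrinks each spatial dim by (kernel_size - 1), so we
        # pad that total amount, split across the two sides of each dim.
        total = kernel_size - 1
        self.pad = (total // 2, total - total // 2) * 2  # (left, right, top, bottom)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size)

    def forward(self, x):
        return self.conv(F.pad(x, self.pad))


x = torch.randn(1, 3, 32, 32)
assert SameConv2d(3, 8, kernel_size=5)(x).shape[-2:] == x.shape[-2:]
```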
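Similarly, here is a short sketch of cross entropy with label smoothing, one of the losses mentioned above. Again, this is a plain-PyTorch illustration of the technique, not necessarily torchkit's exact formulation.

```python
import torch
import torch.nn.functional as F


def smoothed_cross_entropy(logits, targets, smoothing=0.1):
    """Cross entropy against targets mixed with a uniform distribution.

    Equivalent to (1 - smoothing) * NLL(true class) + smoothing * mean NLL
    over all classes. Illustrative only; not torchkit's implementation.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    uniform_nll = -log_probs.mean(dim=-1)
    return ((1.0 - smoothing) * nll + smoothing * uniform_nll).mean()


logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
loss = smoothed_cross_entropy(logits, targets)
```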
For more details about each module, see the documentation.
## Installation

Clone the repository and install it in editable mode:

```bash
git clone https://github.com/kevinzakka/torchkit.git
cd torchkit
pip install -e .
```
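If the install succeeded, importing the package should work. A quick sanity check (`Logger` being a top-level export is an assumption based on the module table above):

```python
import torchkit
from torchkit import Logger  # assumed top-level export, per the table above
```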
## Examples

Coming soon...

## Acknowledgments
- I learned a ton about experiment management by re-implementing Debidatta Dwibedi's open-source implementation of Temporal Cycle Consistency Learning.
- Thanks to Karan Desai's VirTex, which I used to figure out the documentation setup for torchkit, and which is an excellent example of a stellar open-source research release.
- Thanks to Brent Yi for encouraging me to use type hinting and for letting me use his awesome Bayesian filtering library's README as a template.