torchkit is a lightweight library of PyTorch utilities for day-to-day research. Its main goal is to abstract away the redundant boilerplate of research projects, such as experiment configuration, logging, and model checkpointing. It consists of:
- A wrapper around Tensorboard's `SummaryWriter`.
- A port of Tensorflow's checkpoint management tools.
- A set of layers commonly used in research papers but not available in vanilla PyTorch, such as "same" and "causal" convolutions.
- Featurizers: convenience wrappers around PyTorch ResNets and InceptionNets that give you fine-grained control over which layer to use as output, how to fine-tune them, and how to update batch norm statistics. Featurizers can easily be extended to any model of choice.
- Useful loss functions unavailable in vanilla PyTorch, such as cross entropy with label smoothing and Huber loss.
- A collection of helper methods for file/folder manipulation, video frame and audio processing, as well as PyTorch-related utilities.
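To illustrate the loss functions above, here is a minimal pure-Python sketch of cross entropy with label smoothing, using the standard formulation that mixes the one-hot target with a uniform distribution over classes. This is an independent sketch for intuition only; torchkit's actual implementation operates on PyTorch tensors and may differ in details.

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over a list of logits.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross entropy against a smoothed target distribution that puts
    (1 - eps) on the true class and eps/K on every class (K classes)."""
    log_p = log_softmax(logits)
    k = len(logits)
    nll = -log_p[target]                # standard negative log-likelihood
    uniform = -sum(log_p) / k           # expected NLL under uniform targets
    return (1.0 - eps) * nll + eps * uniform
```

With `eps=0` this reduces to ordinary cross entropy; with `eps > 0` a confidently correct prediction incurs a small penalty, which discourages the network from becoming over-confident.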
For more details about each module, see the documentation.
Clone the repo and install it in editable mode:
```bash
git clone https://github.com/kevinzakka/torchkit.git
cd torchkit
pip install -e .
```
- I learned a ton about experiment management by re-implementing Debidatta Dwibedi's open-source implementation of Temporal Cycle Consistency Learning.
- Thanks to Karan Desai's VirTex, which I used to figure out the documentation setup for torchkit, and which is a stellar example of an open-source research release.
- Thanks to Brent Yi for encouraging me to use type hinting and for letting me use his awesome Bayesian filtering library's README as a template.