Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model, trained following a recipe similar to T5's. This repo can be used to reproduce the experiments in the mT5 paper.
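T5-style pretraining (which mT5 inherits) is a span-corruption objective: spans of the input are dropped and each is replaced by a numbered sentinel token; the target reconstructs the dropped spans in order. A minimal sketch of that input/target construction, assuming word-level tokens and pre-chosen spans (real pretraining samples spans randomly over SentencePiece tokens):

```python
def span_corrupt(tokens, spans):
    """Build a T5-style (input, target) pair.

    tokens: list of tokens.
    spans: sorted, non-overlapping (start, length) pairs to mask out.
    """
    inp, tgt = [], []
    pos = 0
    for i, (start, length) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[pos:start])   # keep text before the span
        inp.append(sentinel)            # replace the span with a sentinel
        tgt.append(sentinel)            # target: sentinel, then the span
        tgt.extend(tokens[start:start + length])
        pos = start + length
    inp.extend(tokens[pos:])            # keep the tail
    tgt.append(f"<extra_id_{len(spans)}>")  # final sentinel ends the target
    return inp, tgt


# Example from the T5 paper, masking "for inviting" and "last":
tokens = "Thank you for inviting me to your party last week".split()
inp, tgt = span_corrupt(tokens, [(2, 2), (8, 1)])
# inp: Thank you <extra_id_0> me to your party <extra_id_1> week
# tgt: <extra_id_0> for inviting <extra_id_1> last <extra_id_2>
```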
In this repository we release models from the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale that were pre-trained on the ImageNet-21k dataset. We provide the code for fine-tuning the released models.
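The "image as 16x16 words" idea is essentially a reshape: the image is cut into non-overlapping PxP patches, each patch is flattened into a vector, and those vectors become the transformer's input tokens. A toy sketch with plain nested lists (the repo's JAX code does this as a strided reshape over pixel arrays; for a 224x224 image with P=16 this yields 196 tokens):

```python
def image_to_patches(img, p):
    """Split an H x W image (nested lists) into flattened p x p patches,
    in row-major patch order. H and W must be divisible by p."""
    h, w = len(img), len(img[0])
    assert h % p == 0 and w % p == 0
    patches = []
    for by in range(0, h, p):          # patch row
        for bx in range(0, w, p):      # patch column
            patch = [img[by + dy][bx + dx]
                     for dy in range(p) for dx in range(p)]
            patches.append(patch)      # one flattened "word"
    return patches


# A 4x4 toy image split into 2x2 patches -> 4 tokens of length 4.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
tokens = image_to_patches(img, 2)
# tokens[0] == [0, 1, 4, 5]  (top-left patch)
```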
Scalene is a high-performance CPU and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than other profilers while delivering far more detailed information.
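Scalene is invoked as a drop-in replacement for the interpreter, i.e. `scalene your_script.py`. A small hypothetical target script illustrating the two things its line-level report separates: Python-level CPU time and per-line memory allocation:

```python
def cpu_heavy(n):
    # Pure-Python arithmetic loop: Scalene attributes this to
    # Python (rather than native) CPU time, line by line.
    total = 0
    for i in range(n):
        total += i * i
    return total


def memory_heavy(n):
    # Allocates a large list; Scalene's memory profile flags
    # the allocating line and its volume.
    data = [0.0] * n
    return len(data)


if __name__ == "__main__":
    # Profile with:  scalene this_script.py
    cpu_heavy(10_000_000)
    memory_heavy(10_000_000)
```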
This repository contains the source code and trained model for DialoGPT, a large-scale pretrained dialogue response generation model. Human evaluation results indicate that responses generated by DialoGPT are comparable in quality to human responses in a single-turn conversation Turing test.
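DialoGPT conditions on the dialogue history flattened into a single string, with each turn terminated by the GPT-2 end-of-text token; the model then generates the next response after the final separator. A minimal sketch of that context construction (the token string `<|endoftext|>` is GPT-2's EOS; real usage concatenates token IDs rather than strings):

```python
EOS = "<|endoftext|>"  # GPT-2 / DialoGPT end-of-text token


def build_context(turns):
    """Flatten a list of dialogue turns into DialoGPT's input format:
    each turn followed by the EOS separator."""
    return EOS.join(turns) + EOS


# Two turns of history become one conditioning string:
ctx = build_context(["Hi", "Hello! How are you?"])
# "Hi<|endoftext|>Hello! How are you?<|endoftext|>"
```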
Gkroam is a lightweight Roam Research replica built on top of Emacs org-mode. It uses ripgrep to search for links across pages and automatically inserts references at the bottom of org pages. Gkroam imitates Roam Research in as many aspects as possible.
MyVision is a free online image annotation tool for generating computer-vision ML training data. It is designed with the user in mind, offering features that speed up the labelling process and help maintain efficient workflows.
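Annotation tools like this typically export labels in standard formats; a COCO-style JSON export is a common choice. A minimal, hypothetical builder for that structure (field names follow the COCO object-detection format, where `bbox` is `[x, y, width, height]` in pixels):

```python
import json


def to_coco(image_name, size, boxes):
    """Build a minimal COCO-detection dict for one image.

    size: (width, height) in pixels.
    boxes: list of (label, x, y, w, h) tuples.
    """
    cats = sorted({label for label, *_ in boxes})
    cat_id = {c: i + 1 for i, c in enumerate(cats)}  # COCO ids start at 1
    return {
        "images": [{"id": 1, "file_name": image_name,
                    "width": size[0], "height": size[1]}],
        "annotations": [
            {"id": i + 1, "image_id": 1, "category_id": cat_id[label],
             "bbox": [x, y, w, h], "area": w * h, "iscrowd": 0}
            for i, (label, x, y, w, h) in enumerate(boxes)],
        "categories": [{"id": cat_id[c], "name": c} for c in cats],
    }


doc = to_coco("cat.jpg", (640, 480), [("cat", 10, 20, 100, 80)])
payload = json.dumps(doc)  # what a tool would write to annotations.json
```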
After two successful testnets in 2020, and with our mainnet launch in sight, we are happy to announce the launch of Concordium Testnet 3 and to invite testers, developers, and users all over the world to compete to earn up to 10 million GTU.
This repo contains the human-readable source code for the VIRGO 1302 demoscene 4k intro, and the build pipeline used to produce the final entry file. It placed 1st in the Assembly 2020 4k intro competition. You can view a prerendered version of the intro online.