ReChorus is a general PyTorch framework for Top-K recommendation with implicit feedback, designed especially for research purposes. It aims to provide a fair benchmark for comparing different state-of-the-art algorithms. We hope this can partly alleviate the problem that different papers adopt different experimental settings, so as to form a "Chorus" of recommendation algorithms.
This framework is especially suitable for researchers who want to compare algorithms under the same experimental setting, and for newcomers who want to get familiar with classical methods. The characteristics of our framework can be summarized as follows:
- Easy: the framework is implemented in less than a thousand lines of code, with clean code and adequate comments, which makes it easy to use
- Efficient: multi-thread batch preparation, dedicated implementations for evaluation, and around 90% GPU utilization during training for deep models (see the sketch after this list)
- Agile: concentrate on your model design in a single file and implement new models quickly
- Flexible: implement new readers or runners for different datasets and experimental settings, and assign each model its own specific helpers
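As an illustration of the multi-thread batch preparation mentioned above, PyTorch typically overlaps batch construction with training by building batches in parallel workers. This is a generic sketch of that pattern, not the framework's actual data pipeline:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Hypothetical (user, item) training pairs for illustration only."""
    def __init__(self, n=1024):
        self.users = torch.randint(0, 100, (n,))
        self.items = torch.randint(0, 500, (n,))

    def __len__(self):
        return len(self.users)

    def __getitem__(self, idx):
        # Per-instance preparation runs inside worker processes,
        # overlapping with model computation on the GPU.
        return self.users[idx], self.items[idx]

if __name__ == '__main__':
    loader = DataLoader(ToyDataset(), batch_size=256, shuffle=True,
                        num_workers=4, pin_memory=True)
    for users, items in loader:
        pass  # forward/backward pass would go here
```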
Generally, ReChorus decomposes the whole process into three modules (a minimal sketch in code follows this list):
- Reader: read dataset into DataFrame and append necessary information to each instance
- Runner: control the training process and model evaluation
- Model: define how to generate ranking scores and prepare batches
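The sketch below illustrates this three-module decomposition; the class and method names are illustrative, not the exact ReChorus API:

```python
import pandas as pd
import torch

class Reader:
    """Read the dataset into a DataFrame and append extra information."""
    def __init__(self, path: str):
        self.data = pd.read_csv(path, sep='\t')  # e.g. user_id, item_id, time

class Model(torch.nn.Module):
    """Define how to generate ranking scores for candidate items."""
    def __init__(self, n_users: int, n_items: int, emb_size: int = 64):
        super().__init__()
        self.u_emb = torch.nn.Embedding(n_users, emb_size)
        self.i_emb = torch.nn.Embedding(n_items, emb_size)

    def forward(self, users, items):
        # users: [batch], items: [batch, n_candidates] -> a score per candidate
        return (self.u_emb(users).unsqueeze(1) * self.i_emb(items)).sum(-1)

class Runner:
    """Control the training loop and periodic evaluation."""
    def __init__(self, model: Model, lr: float = 1e-3):
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
```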
- Install Anaconda with Python >= 3.5
- Clone the repository and install requirements:

```bash
git clone https://github.com/THUwangcy/ReChorus.git
cd ReChorus
pip install -r requirements.txt
```
- Run a model on the built-in dataset:

```bash
python main.py --model_name BPR --emb_size 64 --lr 1e-3 --l2 1e-6 --dataset Grocery_and_Gourmet_Food
```
- (optional) Run the jupyter notebook in the `data` folder to download and build new Amazon datasets, or prepare your own datasets according to the README in `data`
- (optional) Implement your own models according to the README in `src` (a toy subclass is sketched below)
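Building on the toy skeleton sketched earlier (again illustrative, not the framework's actual base class), a new model only needs to define its own scoring logic:

```python
class BiasedMF(Model):
    """Toy variant of the Model skeleton above: MF plus an item bias."""
    def __init__(self, n_users, n_items, emb_size=64):
        super().__init__(n_users, n_items, emb_size)
        self.i_bias = torch.nn.Embedding(n_items, 1)

    def forward(self, users, items):
        # Inner product of embeddings plus a learned item popularity bias.
        dot = (self.u_emb(users).unsqueeze(1) * self.i_emb(items)).sum(-1)
        return dot + self.i_bias(items).squeeze(-1)

model = BiasedMF(n_users=100, n_items=500)
```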
We have implemented the following methods (still updating):
- BPR (UAI'09): Bayesian Personalized Ranking from Implicit Feedback (its pairwise loss is sketched after this list)
- NCF (WWW'17): Neural Collaborative Filtering
- Tensor (RecSys'10): N-dimensional Tensor Factorization for Context-aware Collaborative Filtering
- GRU4Rec (ICLR'16): Session-based Recommendations with Recurrent Neural Networks
- NARM (CIKM'17): Neural Attentive Session-based Recommendation
- SASRec (ICDM'18): Self-Attentive Sequential Recommendation
- TiSASRec (WSDM'20): Time Interval Aware Self-Attention for Sequential Recommendation
- CFKG (SIGIR'18): Learning over Knowledge-Base Embeddings for Recommendation
- SLRC (WWW'19): Modeling Item-specific Temporal Dynamics of Repeat Consumption
- Chorus (SIGIR'20): Knowledge- and Time-aware Item Modeling for Sequential Recommendation
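As a pointer for the first entry above, BPR trains with a pairwise objective that pushes each observed item ahead of a sampled negative. A minimal sketch of the standard loss, not tied to this repository's exact implementation:

```python
import torch

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """BPR loss: -log sigmoid(pos - neg), averaged over the batch."""
    return -torch.log(torch.sigmoid(pos_scores - neg_scores) + 1e-8).mean()

# Observed items should score higher than sampled negatives.
loss = bpr_loss(torch.tensor([2.0, 1.5]), torch.tensor([0.5, 1.0]))
```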
The table below lists the results of these models on the Grocery_and_Gourmet_Food dataset (145.8k entries). Leave-one-out is applied to split the data: the most recent interaction of each user is held out for testing, the second most recent for validation, and the remaining interactions are used for training. We randomly sample 99 negative items for each test case, to be ranked together with the ground-truth item. These settings are all common in Top-K sequential recommendation (a minimal sketch of the evaluation appears after the table).
| Model | HR@5 | NDCG@5 | Time/iter | Sequential | Knowledge | Time-aware |
|:-----:|:----:|:------:|:---------:|:----------:|:---------:|:----------:|
For a fair comparison, the batch size is fixed to 256 and the embedding size is set to 64. We strive to tune all the other hyper-parameters to obtain the best performance for each model (the current settings may not be optimal yet and will be updated when better scores are achieved). The current commands are listed in run.sh. We repeat each experiment 5 times with different random seeds and report the average score (see exp.py). All experiments are conducted on a single GTX-1080Ti GPU.
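The leave-one-out evaluation with 99 sampled negatives described above boils down to ranking the ground-truth item among 100 candidates. Here is a minimal sketch of the HR@K and NDCG@K computation, under the assumption of a single relevant item per test case:

```python
import numpy as np

def rank_of_ground_truth(gt_score: float, neg_scores: np.ndarray) -> int:
    """1-based rank of the ground-truth item among itself + sampled negatives."""
    return int((neg_scores >= gt_score).sum()) + 1

def hr_at_k(rank: int, k: int = 5) -> float:
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank: int, k: int = 5) -> float:
    # With a single relevant item, NDCG reduces to 1 / log2(rank + 1).
    return 1.0 / np.log2(rank + 1) if rank <= k else 0.0

# Example test case: ground-truth score ranked against 99 random negatives.
rng = np.random.default_rng(0)
rank = rank_of_ground_truth(0.8, rng.normal(0.0, 0.5, size=99))
print(hr_at_k(rank), ndcg_at_k(rank))
```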
This is also our public implementation for the paper:
Chenyang Wang, Min Zhang, Weizhi Ma, Yiqun Liu, and Shaoping Ma. Make It a Chorus: Knowledge- and Time-aware Item Modeling for Sequential Recommendation. In SIGIR'20.
Check out the SIGIR20 branch to reproduce the results:

```bash
git clone -b SIGIR20 https://github.com/THUwangcy/ReChorus.git
```
Please cite this paper if you use our code. Thanks!
Author: Chenyang Wang