Cornac is a comparative framework for multimodal recommender systems. It focuses on making it convenient to work with models leveraging auxiliary data (e.g., item descriptive text and images, social networks). Cornac enables fast experiments and straightforward implementations of new models. It is highly compatible with existing machine learning libraries (e.g., TensorFlow, PyTorch).
Currently, we are supporting Python 3. There are several ways to install Cornac:
- From PyPI (you may need a C++ compiler):

  ```bash
  pip3 install cornac
  ```
- From Anaconda:

  ```bash
  conda install cornac -c conda-forge
  ```
- From the GitHub source (for the latest updates):

  ```bash
  pip3 install Cython
  git clone https://github.com/PreferredAI/cornac.git
  cd cornac
  python3 setup.py install
  ```
Additional dependencies required by models are listed here.
Some algorithm implementations use OpenMP to support multi-threading. For macOS users, in order to run those algorithms efficiently, you might need to install gcc from Homebrew to have an OpenMP-capable compiler:

```bash
brew install gcc
brew link gcc
```
If you want to utilize your GPUs, you might consider installing the GPU-enabled builds of the deep learning frameworks Cornac works with (e.g., TensorFlow, PyTorch).
Getting started: your first Cornac experiment
Flow of an Experiment in Cornac
Load the built-in MovieLens 100K dataset (will be downloaded if not cached):
```python
import cornac

ml_100k = cornac.datasets.movielens.load_feedback(variant="100K")
```
Split the data based on ratio:
```python
rs = cornac.eval_methods.RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)
```
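For intuition, a ratio split of this kind shuffles the interactions deterministically with the given seed and holds out the tail for testing. The following is a minimal pure-Python sketch of the idea, not Cornac's actual implementation (which also handles rating thresholds and unknown users/items):

```python
import random

def ratio_split(data, test_size=0.2, seed=123):
    """Illustrative sketch: shuffle interactions with a fixed seed,
    then hold out the last `test_size` fraction for testing."""
    rng = random.Random(seed)
    shuffled = data[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_size))
    return shuffled[:cut], shuffled[cut:]

# Toy (user, item, rating) interactions
interactions = [("u1", "i1", 5.0), ("u1", "i2", 3.0), ("u2", "i1", 4.0),
                ("u2", "i3", 2.0), ("u3", "i2", 5.0)]
train, test = ratio_split(interactions, test_size=0.2, seed=123)
# 5 interactions at test_size=0.2 -> 4 train, 1 test
```

Fixing the seed makes the split reproducible across runs, which is why the snippets in this walkthrough all pass `seed=123`.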
Here we are comparing three models: Matrix Factorization (MF), Probabilistic Matrix Factorization (PMF), and Bayesian Personalized Ranking (BPR):
```python
mf = cornac.models.MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123)
pmf = cornac.models.PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123)
bpr = cornac.models.BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123)
```
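As background on what these models learn: biased MF scores a user-item pair as the global mean plus user and item bias terms plus the dot product of `k`-dimensional latent factors. A hedged sketch of that scoring rule (the names below are illustrative, not Cornac's API):

```python
import numpy as np

def mf_score(user_vec, item_vec, user_bias, item_bias, global_mean):
    """Biased matrix factorization prediction:
    r_hat = mu + b_u + b_i + p_u . q_i"""
    return global_mean + user_bias + item_bias + float(np.dot(user_vec, item_vec))

# Toy factors with k=3 latent dimensions
p_u = np.array([0.1, -0.2, 0.3])
q_i = np.array([0.4, 0.1, 0.2])
score = mf_score(p_u, q_i, user_bias=0.05, item_bias=-0.1, global_mean=3.5)
```

With `use_bias=False` the bias terms drop out and the prediction reduces to the plain dot product.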
Define metrics used to evaluate the models:
```python
mae = cornac.metrics.MAE()
rmse = cornac.metrics.RMSE()
prec = cornac.metrics.Precision(k=10)
recall = cornac.metrics.Recall(k=10)
ndcg = cornac.metrics.NDCG(k=10)
auc = cornac.metrics.AUC()
mAP = cornac.metrics.MAP()
```
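As a quick illustration of the ranking metrics: Precision@k is the fraction of the top-k recommendations that are relevant, while Recall@k is the fraction of all relevant items that appear in the top-k. A toy per-user sketch (not Cornac's implementation):

```python
def precision_recall_at_k(ranked_items, relevant_items, k=10):
    """Compute Precision@k and Recall@k for one user.
    ranked_items: recommendation list, best first.
    relevant_items: set of ground-truth relevant items."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall

ranked = ["i3", "i7", "i1", "i9", "i2"]
relevant = {"i1", "i2", "i5"}
p, r = precision_recall_at_k(ranked, relevant, k=5)
# 2 of the top-5 are relevant -> precision 0.4; 2 of 3 relevant items retrieved -> recall ~0.667
```

This is also why `rating_threshold=4.0` matters in the split above: it decides which held-out interactions count as "relevant" for these ranking metrics.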
Put everything together into an experiment and run it:
```python
cornac.Experiment(
    eval_method=rs,
    models=[mf, pmf, bpr],
    metrics=[mae, rmse, recall, ndcg, auc, mAP],
    user_based=True,
).run()
```
The experiment reports, for each model, the columns: MAE, RMSE, AUC, MAP, NDCG@10, Precision@10, Recall@10, Train (s), and Test (s).
The recommender models supported by Cornac are listed below. Why don't you join us to lengthen the list?
Your contributions at any level of the library are welcome. If you intend to contribute, please:
- Fork the Cornac repository to your own account.
- Make changes and create pull requests.
You can also post bug reports and feature requests in GitHub issues.