Tutorial code on how to build your own Deep Learning System in 2k Lines

TinyFlow: Build Your Own DL System in 2K Lines

TinyFlow is "example code" for NNVM. It demonstrates how we can build a clean, minimal, and powerful computational-graph-based deep learning system with the same API as TensorFlow.
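As a rough illustration of what "computational graph based" means here, the sketch below (not TinyFlow's actual code; all names are made up for illustration) builds a tiny graph of operations, runs it forward, then walks it backward with reverse-mode autodiff:

```python
class Node:
    """One vertex in the computational graph."""
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # result of the forward computation
        self.parents = parents    # upstream nodes this one depends on
        self.grad_fns = grad_fns  # one local-gradient function per parent
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), (lambda g: g, lambda g: g))

def mul(a, b):
    return Node(a.value * b.value, (a, b),
                (lambda g: g * b.value, lambda g: g * a.value))

def backward(out):
    # Reverse-mode autodiff: seed the output with gradient 1.0, then push
    # each node's gradient back to its parents. (A real system would
    # topologically sort the graph first; this simple traversal is enough
    # when shared nodes are leaves, as below.)
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, fn in zip(node.parents, node.grad_fns):
            parent.grad += fn(node.grad)
            stack.append(parent)

x = Node(3.0)
y = Node(4.0)
z = add(mul(x, y), x)            # z = x*y + x
backward(z)
print(z.value, x.grad, y.grad)   # 15.0, dz/dx = y+1 = 5.0, dz/dy = x = 3.0
```

A TensorFlow-style user API is then a thin layer that records nodes like these whenever the user calls an op.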

Related Repos



yixuan tinydnn tinydnn is an (experimental) R wrapper of the tiny-dnn library for implementing Deep Neural Networks (DNNs). The largest advantage of tiny-dnn over other deep learning frameworks is its minimal dependencies.
 

mldbai MLDB is the Machine Learning Database by MLDB.ai, an open-source database designed for machine learning. You can install it wherever you want and send it commands over a RESTful API to store and explore data.
 

nenadmarkus Combining neural networks and decision trees. This repo contains a demo for a technical report on arXiv (BibTeX key nets-and-trees) by Nenad Markuš, Ivan Gogić, Igor S. Pandžić, and Jörgen Ah…
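The report's specific method is not shown here; as a generic illustration of one way to combine the two ideas, a "soft" decision tree can route inputs through tiny neural units (a sigmoid over a linear projection) and mix leaf predictions by routing probability:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(tree, x):
    # tree is ("leaf", value) or ("node", w, left, right).
    if tree[0] == "leaf":
        return tree[1]
    _, w, left, right = tree
    p = sigmoid(dot(w, x))  # neural routing unit: probability of going right
    return (1 - p) * predict(left, x) + p * predict(right, x)

# A depth-2 soft tree with hand-picked weights (illustrative values only).
tree = ("node", [1.0, -1.0],
        ("leaf", 0.0),
        ("node", [0.5, 0.5], ("leaf", 1.0), ("leaf", 2.0)))

print(predict(tree, [2.0, 0.0]))
```

Because every routing decision is differentiable, both the routing weights and the leaf values can be trained with gradient descent, which is what makes the pairing with neural networks natural.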
 

Artelnics OpenNN OpenNN is a software library written in C++ for advanced analytics. It implements neural networks, the most successful machine learning method. The main advantage of OpenNN is its high performance.
 

facebookresearch Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed by Facebook AI Research.
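To make the contract concrete: a Faiss "flat" index performs exact brute-force search. The sketch below mimics that add/search behavior in plain NumPy (illustrative only; this is not Faiss's implementation or API):

```python
import numpy as np

class FlatL2Index:
    """Exact nearest-neighbor search over dense vectors, brute force."""
    def __init__(self, dim):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)

    def add(self, xb):
        # Append database vectors (n, dim).
        self.vectors = np.vstack([self.vectors, xb.astype(np.float32)])

    def search(self, xq, k):
        # Squared L2 distance between every query and every stored vector.
        diffs = xq[:, None, :] - self.vectors[None, :, :]
        dists = (diffs ** 2).sum(axis=-1)
        idx = np.argsort(dists, axis=1)[:, :k]
        return np.take_along_axis(dists, idx, axis=1), idx

rng = np.random.default_rng(0)
index = FlatL2Index(8)
index.add(rng.normal(size=(100, 8)))          # 100 database vectors
dists, ids = index.search(rng.normal(size=(3, 8)), k=5)
print(ids.shape)  # (3, 5): 5 nearest neighbors for each of 3 queries
```

Faiss's value is doing this same job orders of magnitude faster, at billion-vector scale, with approximate indexes and GPU kernels.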
 

MichaelJWelsh Yannl Yannl is a compact, portable, feed-forward artificial neural network library written in C++11. Yannl has no dependencies but can be easily optimized for matrix multiplication via callbacks (see the integrating-BLAS example).
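The callback idea can be sketched as follows (names are illustrative, not Yannl's actual API): the layer takes a pluggable matmul function, so a naive loop can be swapped for a BLAS-backed backend without touching the layer code:

```python
import numpy as np

def naive_matmul(a, b):
    # Portable, dependency-free triple loop over nested lists.
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            aik = a[i][k]
            for j in range(cols):
                out[i][j] += aik * b[k][j]
    return out

class DenseLayer:
    def __init__(self, weights, matmul=naive_matmul):
        self.weights = weights  # (in, out) matrix as nested lists
        self.matmul = matmul    # injected matmul backend

    def forward(self, x):
        return self.matmul(x, self.weights)

w = [[1.0, 2.0], [3.0, 4.0]]
layer = DenseLayer(w)  # default: dependency-free naive backend
# Same layer, but routed through a BLAS-backed backend (NumPy here):
fast = DenseLayer(w, matmul=lambda a, b: (np.array(a) @ np.array(b)).tolist())
print(layer.forward([[1.0, 1.0]]), fast.forward([[1.0, 1.0]]))
```

The design keeps the core library dependency-free while letting users opt into an optimized backend with a single callback.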
 

griegler OctNet uses efficient space-partitioning structures (i.e., octrees) to reduce the memory and compute requirements of 3D convolutional neural networks, thereby enabling deep learning at high resolutions. This is the code for the OctNet paper.
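A toy sketch of why octrees help (illustrative only, not OctNet's data structure): subdivide space only where a cell is "mixed", and compare how many cells the octree stores against a dense voxel grid at the same resolution, here for a sphere occupancy function:

```python
CENTER, R2 = (0.5, 0.5, 0.5), 0.4 ** 2  # sphere of radius 0.4 in unit cube

def classify(lo, size):
    # "in", "out", or "mixed" for a cube against the sphere, using the
    # nearest and farthest point of the cube from the sphere center.
    d_min = d_max = 0.0
    for c, l in zip(CENTER, lo):
        near = min(max(c, l), l + size)
        far = l if abs(c - l) >= abs(c - (l + size)) else l + size
        d_min += (c - near) ** 2
        d_max += (c - far) ** 2
    if d_max < R2:
        return "in"
    if d_min > R2:
        return "out"
    return "mixed"

def count_cells(lo, size, depth):
    # Homogeneous regions (fully in or out) collapse into ONE stored cell;
    # only mixed regions are subdivided into 8 octants.
    if depth == 0 or classify(lo, size) != "mixed":
        return 1
    half = size / 2.0
    total = 0
    for dx in (0.0, half):
        for dy in (0.0, half):
            for dz in (0.0, half):
                total += count_cells((lo[0] + dx, lo[1] + dy, lo[2] + dz),
                                     half, depth - 1)
    return total

octree = count_cells((0.0, 0.0, 0.0), 1.0, depth=5)
dense = (2 ** 5) ** 3  # the same 32^3 resolution stored densely
print(octree, "octree cells vs", dense, "dense voxels")
```

Only cells crossing the surface reach full resolution, so memory scales roughly with the object's surface rather than its volume, which is the effect OctNet exploits for 3D CNNs.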
 

facebookarchive TorchMPI TorchMPI provides a simple abstraction for distributing the training of Torch neural network models on multi-node, multi-GPU clusters. We support (a)synchronous data-parallel SGD, model-parallel SGD, and CPU-side parameter servers.
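Synchronous data-parallel SGD, the simplest of those modes, can be sketched without MPI (illustrative only, not TorchMPI's API): each worker computes a gradient on its own shard of the batch, the gradients are averaged (the job an allreduce does across nodes), and every replica applies the identical update so the models stay in sync:

```python
import numpy as np

def grad_mse(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
shards = np.array_split(np.arange(64), 4)  # 4 "workers", 16 samples each
for step in range(200):
    local = [grad_mse(w, X[s], y[s]) for s in shards]  # per-worker gradients
    g = np.mean(local, axis=0)                         # the "allreduce" step
    w -= 0.1 * g                                       # same update on every replica
print(np.round(w, 3))  # recovers true_w
```

With equal-sized shards the averaged gradient equals the full-batch gradient, so the distributed run follows the same trajectory as a single-machine one; the asynchronous and parameter-server modes trade that exactness for less synchronization.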