Simple Keras model that tackles the Stanford Natural Language Inference (SNLI) corpus using summation and/or recurrent neural networks

Keras SNLI baseline example

This repository contains a simple Keras baseline to train a variety of neural networks to tackle the Stanford Natural Language Inference (SNLI) corpus. The aim is to determine whether a premise sentence entails, contradicts, or is neutral with respect to a hypothesis sentence.
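The simplest of the baselines mentioned above is the summation ("bag of embeddings") model: encode each sentence by summing its word embeddings, concatenate the premise and hypothesis vectors, and classify into the three SNLI labels. A minimal sketch of that forward pass, using numpy instead of Keras and hypothetical vocabulary/embedding sizes, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, CLASSES = 1000, 50, 3  # hypothetical sizes; SNLI has 3 labels

E = rng.normal(size=(VOCAB, DIM))        # word-embedding table
W = rng.normal(size=(2 * DIM, CLASSES))  # classifier weights (untrained here)
b = np.zeros(CLASSES)

def encode(token_ids):
    """Sum-of-embeddings sentence encoder (the 'summation' baseline)."""
    return E[token_ids].sum(axis=0)

def predict(premise_ids, hypothesis_ids):
    """Concatenate both sentence vectors, then apply a softmax classifier."""
    features = np.concatenate([encode(premise_ids), encode(hypothesis_ids)])
    logits = features @ W + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # probs over entailment / contradiction / neutral

probs = predict([1, 2, 3], [4, 5])  # toy token-id sequences
```

In the actual repository the encoder can also be a recurrent network (e.g. an LSTM) instead of a plain sum, and the weights are of course learned rather than random; the sketch only shows the data flow.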

Related Repos



barissayil Sentiment analysis neural network trained by fine-tuning BERT, ALBERT, or DistilBERT on the Stanford Sentiment Treebank.
 

qingkongzhiqian A Chinese text-summarization model based on GPT-2.
 

EssayKillerBrain A first-generation creative-writing AI based on open-source GPT-2.0 | extensible and evolvable.
 

EdinburghNLP OPUS-100 is an English-centric multilingual corpus covering 100 languages, randomly sampled from the OPUS collection.
 

keras-team Layers are the fundamental building blocks for NLP models. They can be used to assemble new layers, networks, or models.
 

prajjwal1 A deep learning library based on PyTorch, focused on low-resource language research and robustness.
 

bojone Unsupervised Chinese word segmentation and syntactic parsing based on BERT.
 

MaartenGr BERTopic is a topic modeling technique that leverages BERT embeddings and c-TF-IDF to create dense clusters, allowing for easily interpretable topics while keeping important words in the topic descriptions.