- Seattle, WA
- lambdaviking.com
- @lambdaviking
Stars
An opinionated list of awesome Python frameworks, libraries, software and resources.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal tasks, for both inference and training.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Models and examples built with TensorFlow
Rich is a Python library for rich text and beautiful formatting in the terminal (see the sketch after this list).
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more (see the sketch after this list)
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
⚡ A Fast, Extensible Progress Bar for Python and CLI (see the sketch after this list)
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Python Fire is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object (see the sketch after this list).
Code for the paper "Language Models are Unsupervised Multitask Learners"
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
🤗 The largest hub of ready-to-use datasets for AI models with fast, easy-to-use and efficient data manipulation tools
An open-source NLP research library, built on PyTorch.
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
Modeling, training, eval, and inference code for OLMo
Progressive Growing of GANs for Improved Quality, Stability, and Variation
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
A tool for extracting plain text from Wikipedia dumps
A library for Multilingual Unsupervised or Supervised word Embeddings
LSTM and QRNN Language Model Toolkit for PyTorch
Deep learning with dynamic computation graphs in TensorFlow
A PyTorch implementation of the NIPS 2017 paper "Dynamic Routing Between Capsules".
PyTorch implementation of the Quasi-Recurrent Neural Network - up to 16 times faster than NVIDIA's cuDNN LSTM
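
Rich's one-line description is easiest to appreciate with its console markup in action. A minimal sketch, assuming only that the `rich` package is installed; the run names and numbers in the table are made up for illustration:

```python
from rich.console import Console
from rich.table import Table

console = Console()

# Inline markup renders colored, styled text in the terminal
console.print("[bold green]Training finished[/bold green] in [cyan]42s[/cyan]")

# A small table drawn with box characters (contents are illustrative)
table = Table(title="Runs")
table.add_column("Model")
table.add_column("Loss", justify="right")
table.add_row("baseline", "2.31")
table.add_row("large", "1.87")
console.print(table)
```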

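The JAX entry's "differentiate, vectorize, JIT" phrasing maps directly onto the `grad`, `vmap`, and `jit` transformations. A minimal sketch; the linear-model loss and array shapes are illustrative, not taken from any of the starred projects:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    """Mean squared error of a linear model (illustrative)."""
    return jnp.mean((x @ w - y) ** 2)

grad_loss = jax.grad(loss)                          # differentiate w.r.t. w
fast_grad = jax.jit(grad_loss)                      # JIT-compile for CPU/GPU/TPU
per_example = jax.vmap(loss, in_axes=(None, 0, 0))  # vectorize over examples

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(fast_grad(w, x, y))    # gradient of shape (3,)
print(per_example(w, x, y))  # one loss per example, shape (8,)
```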

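The tqdm entry boils down to a single idiom: wrap an iterable and get a live progress bar. A minimal sketch, with `sleep` standing in for real work:

```python
from time import sleep
from tqdm import tqdm

# Wrapping any iterable yields a live bar with rate and ETA
for step in tqdm(range(100), desc="training"):
    sleep(0.01)  # stand-in for the actual work per step
```

The same package also installs a `tqdm` command for use in shell pipelines, which is what the "and CLI" part of the description refers to.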

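Python Fire's claim about "absolutely any Python object" is clearest with an example: whatever is passed to `fire.Fire` becomes the CLI, with function arguments exposed as flags. A minimal sketch; the file name `greet.py` and the function itself are hypothetical:

```python
import fire

def greet(name="world", shout=False):
    """Fire exposes `name` and `shout` as command-line flags."""
    message = f"Hello, {name}!"
    return message.upper() if shout else message

if __name__ == "__main__":
    fire.Fire(greet)  # build the CLI from the function's signature
```

Running `python greet.py --name=Ada --shout` would then print `HELLO, ADA!`, with no argument-parsing code written by hand.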