LLM Powered Autonomous Agents (published June 23, 2023; tags: GPT, research)
In an LLM-powered autonomous agent system, the LLM functions as the agent's brain, complemented by several key components: planning, memory, and tool use.
Aligning language models to follow instructions (published January 27, 2022; tags: GPT-3.5, openai, research)
Trained language models that are much better at following user intentions than GPT-3, while also making them more truthful and less toxic.
Language models are few-shot learners (published May 28, 2020; tags: GPT-3, openai, research)
Introduces GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model.
Fine-tuning GPT-2 from human preferences (published September 19, 2019; tags: GPT-2, openai, research)
Fine-tuned the 774M-parameter GPT-2 language model using human feedback for various tasks.
Improving language understanding with unsupervised learning (published June 11, 2018; tags: GPT-1, openai, research)
Introduces GPT-1, a combination of two existing ideas: transformers and unsupervised pre-training.
Attention is All You Need (published June 12, 2017; tags: research, google brain)
Proposes a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
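The attention mechanism at the core of the Transformer is scaled dot-product attention: softmax(QKᵀ/√d_k)V. A minimal NumPy sketch (function name and toy shapes are illustrative, not from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable row-wise softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# toy example: 3 queries attend over 4 key/value pairs of dimension 2
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 2))
K = rng.normal(size=(4, 2))
V = rng.normal(size=(4, 2))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per query: (3, 2)
```

Each output row is a convex combination of the value vectors, weighted by how strongly the corresponding query matches each key; the √d_k scaling keeps the dot products from saturating the softmax at larger dimensions.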