A concise history and timeline of GPT and related AI advancements

A brief history of Generative Pre-trained Transformer technology. 'You can't connect the dots looking forward; you can only connect them looking backwards.' -- Steve Jobs

  • Published on

The Turing test was proposed to consider the question, ‘Can machines think?’

    A test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
  • Published on

    A proposal for the Dartmouth summer research project on Artificial Intelligence

    The artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.
  • Published on

The term 'Deep Learning' was proposed

    Learning while searching in constraint-satisfaction problems.
  • Published on

    OpenAI was founded

A U.S.-based artificial intelligence research organization founded with the goal of developing "safe and beneficial" AI.
  • Published on

    Attention is All You Need

Proposed a new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely.
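The Transformer's core operation is scaled dot-product attention. As a rough illustration of the formula Attention(Q, K, V) = softmax(QKᵀ/√d_k)V (a minimal NumPy sketch, not the paper's actual implementation, which also uses multiple heads, masking, and learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # numerically stable softmax over each row of scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# toy example: 3 tokens, 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a weighted average of the value rows, with weights determined by query-key similarity; this is what lets the model relate any two positions in a sequence without recurrence or convolution.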
  • Published on

    Improving language understanding with unsupervised learning

GPT-1, a combination of two existing ideas: transformers and unsupervised pre-training.
  • Published on

NVIDIA released its first RTX graphics card, the GeForce RTX 2080

    It adopts the NVIDIA Turing architecture and supports real-time ray tracing.
  • Published on

    Fine-tuning GPT-2 from human preferences

Fine-tuned the 774M-parameter GPT-2 language model using human feedback on various tasks.
  • Published on

    Language models are few-shot learners

    GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model.
  • Published on

    Aligning language models to follow instructions

GPT-3.5: language models trained to follow user intentions much better than GPT-3, while also being more truthful and less toxic.
  • Published on

    DALL·E 2 is an AI system that can create realistic images and art from a description in natural language.

DALL·E 2 generates more realistic and accurate images, with 4x greater resolution than the original DALL·E.
  • Published on

LangChain's initial commit

    LangChain is a framework for developing applications powered by language models.
  • Published on

    Introducing ChatGPT

A model that interacts in a conversational way, making it possible to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
  • Published on

    LLaMA, a foundational 65-billion-parameter large language model

Large Language Model Meta AI, designed to help researchers advance their work in this subfield of AI.
  • Published on

    LLM Powered Autonomous Agents

In an LLM-powered autonomous agent system, the LLM functions as the agent's brain, complemented by several key components: planning, memory, and tool use.
  • Published on

    Meta and Microsoft Introduce the Next Generation of Llama

    The next generation of our open source large language model is free for research and commercial use.
  • Published on

    Introducing GPTs

You can now create a custom version of ChatGPT for a specific purpose, with no coding required, and then share that creation.
  • Published on

    Introducing the GPT Store

The GPT Store lets you share your own GPTs and discover useful, popular custom versions of ChatGPT.
  • Published on

    NVIDIA Chat with RTX

Your personalized AI chatbot, connected to your own content and accelerated by RTX.
  • Published on

Sora, an AI model that can create video from text

    An AI model that can create realistic and imaginative scenes from text instructions.