The AI Glossary

A comprehensive reference of AI terminology and concepts

1-bit LLM

A specialized type of LLM that uses only 1 bit to store each weight parameter, so each weight can take just two values (in practice usually -1 or +1), instead of the traditional 32 or 16 bits, dramatically reducing the model size. This enables LLMs to run on smaller devices like phones, while maintaining comparable performance to traditional models through architectural optimizations.
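
A rough sketch of the core idea: binarize a weight matrix to -1/+1 plus a single scaling factor. The quantization scheme and numbers here are illustrative only; real 1-bit LLMs use more careful schemes and custom low-bit kernels.

```python
import numpy as np

def quantize_1bit(weights):
    """Binarize a weight matrix to -1/+1 plus one scaling factor.

    Illustrative only: real 1-bit LLMs use more careful quantization
    schemes and custom low-bit kernels.
    """
    scale = np.mean(np.abs(weights))           # keep the overall magnitude
    binary = np.where(weights >= 0, 1, -1)     # one bit of information per weight
    return binary.astype(np.int8), scale

def linear_1bit(x, binary, scale):
    """A linear layer computed with the binarized weights."""
    return (x @ binary.T) * scale

# Compare a full-precision layer with its 1-bit approximation.
w = np.random.randn(4, 8).astype(np.float32)   # original 32-bit weights
x = np.random.randn(2, 8).astype(np.float32)   # a batch of inputs
b, s = quantize_1bit(w)
print(x @ w.T)               # full-precision output
print(linear_1bit(x, b, s))  # 1-bit approximation
```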

Agent-shoring

A term for the practice of outsourcing tasks to agents. Like offshoring or nearshoring, but for AI agents.

Backpropagation

A technique essential to training neural networks, including LLMs, for adjusting the internal settings, or "weights," the network uses. It works by measuring the error at the output and propagating that error backwards through the network to work out how much each weight needs to be corrected, improving accuracy.
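
A minimal sketch of the idea on a single linear layer with made-up data; real networks repeat the same chain-rule step layer by layer.

```python
import numpy as np

# Made-up regression data: the "network" should learn true_w.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))            # inputs
true_w = np.array([2.0, -1.0, 0.5])
y = x @ true_w                           # target outputs

w = np.zeros(3)                          # the network's weights, to be learned
lr = 0.1                                 # learning rate

for step in range(200):
    pred = x @ w                         # forward pass: compute the output
    error = pred - y                     # check the error at the output
    grad = x.T @ error / len(x)          # backward pass: send the correction
                                         # back through the network (chain rule)
    w -= lr * grad                       # adjust the weights

print(w)  # ends up close to [2.0, -1.0, 0.5]
```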

ChatGPT

An AI chatbot developed by OpenAI that uses LLMs to generate human-like text responses. It can answer questions, write content, assist with coding, and engage in conversation across a wide range of topics and tasks.

Claude

An AI chatbot developed by Anthropic that uses LLMs to generate human-like text responses. It can answer questions, write content, assist with coding, and engage in conversation across a wide range of topics and tasks.

Cursor

A fork of VSCode from Anysphere that includes tooling for writing software with the assistance of AI.

Eval - Evaluation

A set of tasks used to measure the quality of AI model outputs. Evals help ensure applications are stable and resilient to model changes, by comparing model outputs against ideal answers or using other models to grade responses. Similar to unit testing in traditional software development.
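
A minimal sketch of an eval harness. The test cases are illustrative, and ask_model is a placeholder for however you call your model (an API client, a local model, etc.).

```python
from typing import Callable

# Illustrative test cases: prompts paired with the ideal answer.
eval_cases = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "What is 2 + 2?", "expected": "4"},
]

def run_eval(ask_model: Callable[[str], str]) -> float:
    """Return the fraction of cases where the expected answer appears."""
    passed = 0
    for case in eval_cases:
        answer = ask_model(case["prompt"])
        if case["expected"].lower() in answer.lower():
            passed += 1
    return passed / len(eval_cases)

# A stub "model" just to show the shape of the loop; swap in a real client.
print(run_eval(lambda prompt: "I think the answer is Paris."))
```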

Generative Models

Algorithms that learn to generate data based on the data they are trained on. This can be text, images, audio, video, or other data types. The most well-known generative models are the ones that generate text, such as Claude, Perplexity, and ChatGPT.

GPT - Generative Pre-trained Transformer

A type of large language model that uses the transformer architecture and is trained first on a large corpus of text (pre-trained) and then fine-tuned for specific tasks. GPT models are the fundamental technology behind AI tools like Claude, Perplexity, and ChatGPT.
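
A minimal example of running a small, openly released GPT model (GPT-2) locally, assuming the Hugging Face transformers library and a backend such as PyTorch are installed.

```python
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The transformer architecture is", max_new_tokens=30)
print(result[0]["generated_text"])
```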

Latent Space

In generative AI, the latent space is a lower-dimensional space that data is mapped into. Representing complex data in this compressed form makes it efficient to work with and to generate new data from.
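
A simple illustration of mapping data into a lower-dimensional space, using PCA as a stand-in for the learned encoders that generative models use; the data here is random and purely illustrative.

```python
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.decomposition import PCA

data = np.random.randn(1000, 128)              # 1000 samples, 128 dimensions each
pca = PCA(n_components=2)                      # map into a 2-dimensional latent space
latent = pca.fit_transform(data)               # shape: (1000, 2)
reconstructed = pca.inverse_transform(latent)  # map back to the original space

print(latent.shape, reconstructed.shape)
```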

LLM - Large Language Model

An artificial intelligence model trained on vast amounts of text data that can understand, generate, and manipulate human language. LLMs use deep learning techniques, particularly transformer architectures, to process and generate text based on input prompts.
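
A conceptual sketch of how an LLM turns a prompt into text by predicting one token at a time. The toy next-token function below is a stand-in for a real transformer model, which would pick tokens using learned probabilities.

```python
import random

VOCAB = ["the", "model", "reads", "text", "and", "predicts", "tokens", "."]

def toy_next_token(context):
    """Stand-in for a transformer forward pass over the context so far."""
    random.seed(len(context))          # deterministic for the demo
    return random.choice(VOCAB)

def generate(prompt, max_new_tokens=10):
    tokens = prompt.split()            # a real LLM uses a learned tokenizer instead
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)   # predict the next token
        tokens.append(nxt)             # feed it back in and repeat
        if nxt == ".":
            break
    return " ".join(tokens)

print(generate("Large language models"))
```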

MCP - Model Context Protocol

An open protocol, introduced by Anthropic, that standardizes how AI applications connect language models to external tools, data sources, and prompts. MCP servers expose these capabilities and MCP clients (such as chat apps and coding assistants) consume them, which helps create consistent and reliable interactions with language models across different applications.
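
A rough sketch of the kind of JSON-RPC message an MCP client sends to an MCP server. The tool name and arguments are made up for illustration, and real applications use the official SDKs rather than hand-building messages.

```python
import json

# The shape of a tool-call request from an MCP client to an MCP server.
# MCP is built on JSON-RPC 2.0; "search_docs" and its arguments are made up.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                      # run one of the server's tools
    "params": {
        "name": "search_docs",                   # hypothetical tool name
        "arguments": {"query": "latent space"},
    },
}

print(json.dumps(tool_call_request, indent=2))
```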

Perplexity

An AI-powered search engine and information assistant. It combines LLM capabilities with real-time web searching to provide responses with cited sources.

Roo Code

A VSCode extension for software development that uses autonomous task execution and reasoning. It allows developers to build software by writing prompts rather than code.

Windsurf

A fork of VSCode from Codeium that includes tooling for writing software with the assistance of AI.