3-ai-terms

~5 min

AI Terms Glossary

Agentic AI
AI systems that can autonomously plan, make decisions, and take actions to achieve specific goals without constant human intervention.

Agentic AI represents a significant evolution beyond traditional AI systems by incorporating goal-directed behavior, autonomous decision-making, and the ability to interact with external tools and environments. These systems can break down complex tasks into subtasks, create action plans, execute those plans using available resources, and adapt their strategies based on feedback. Unlike reactive AI that simply responds to inputs, agentic AI maintains context across interactions, learns from outcomes, and can even request additional information or resources when needed to accomplish objectives.
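
To make the plan-act-observe loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical scaffolding: `llm`, `tools`, and the `next_action` decision object stand in for whatever model and tool interfaces a real agent framework would provide.

```python
# A minimal, hypothetical agent loop illustrating the plan/act/observe
# cycle described above. `llm` and `tools` are stand-ins, not a real API.

def run_agent(llm, tools, goal, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model to choose the next action given the goal and
        # everything that has happened so far (the "planning" step).
        decision = llm.next_action(context="\n".join(history))
        if decision.is_done:
            return decision.answer
        # Execute the chosen tool and feed the result back in, so the
        # agent can adapt its strategy based on the outcome.
        result = tools[decision.tool_name](**decision.arguments)
        history.append(f"Action: {decision.tool_name} -> {result}")
    return "Stopped: step budget exhausted"
```

The key property is the feedback edge: each tool result re-enters the context, which is what lets the agent revise its plan instead of executing a fixed script.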

Large Reasoning Model (LRM)
Advanced AI models specifically designed to perform complex multi-step reasoning and problem-solving tasks that require logical thinking and strategic planning.

Large Reasoning Models extend beyond pattern recognition and text generation to focus on systematic thinking and logical deduction. These models are trained to decompose complex problems, maintain coherent chains of thought, verify intermediate steps, and arrive at well-reasoned conclusions. They excel at mathematical proofs, scientific reasoning, strategic planning, and debugging complex systems. Unlike standard language models that may hallucinate or make logical leaps, LRMs are optimized to show their work, validate assumptions, and acknowledge uncertainty when appropriate.
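
As a rough illustration of that decompose-solve-verify pattern, consider the sketch below. The `model` object and its `decompose`, `solve`, `verify`, and `conclude` methods are hypothetical stand-ins, not any real model API.

```python
# Hypothetical sketch of the decompose -> solve -> verify pattern that
# reasoning models are trained to follow internally.

def solve_with_verification(model, problem):
    steps = model.decompose(problem)            # break the problem into subtasks
    solutions = []
    for step in steps:
        candidate = model.solve(step, context=solutions)
        # Re-check each intermediate result before committing to it,
        # mirroring how LRMs validate steps in their chain of thought.
        if not model.verify(step, candidate):
            candidate = model.solve(step, context=solutions, retry=True)
        solutions.append(candidate)
    return model.conclude(problem, solutions)
```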

Vector Database
A specialized database system designed to store, index, and efficiently search high-dimensional vector embeddings that represent semantic meaning of text, images, or other data.

Vector databases solve the challenge of finding semantically similar information in large datasets by converting data into numerical vectors (embeddings) that capture meaning in high-dimensional space. When you search a vector database, it finds items whose vectors are closest to your query vector, enabling semantic search that understands context and meaning rather than just matching keywords. This technology is fundamental to modern AI applications: it powers recommendation systems and similarity search, and lets AI models efficiently pull relevant information from vast knowledge bases.
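
At its core, that search is a nearest-neighbor lookup. Here is a toy brute-force version using cosine similarity; real vector databases replace the linear scan with approximate indexes (e.g. HNSW) to stay fast at scale, and the random vectors below stand in for real embeddings.

```python
# Toy nearest-neighbor search over embeddings, the core operation a
# vector database optimizes at scale.

import numpy as np

def top_k(query, vectors, k=2):
    # Cosine similarity: normalize, then take dot products.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q
    return np.argsort(scores)[::-1][:k]   # indices of the closest items

docs = ["how to bake bread", "sourdough starter tips", "fixing a flat tire"]
vectors = np.random.rand(3, 8)   # stand-in for real document embeddings
query = np.random.rand(8)        # stand-in for an embedded query
for i in top_k(query, vectors):
    print(docs[i])
```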

RAG (Retrieval Augmented Generation)
A technique that enhances AI language models by retrieving relevant information from external knowledge sources before generating responses, combining the benefits of search and generation.

RAG addresses a key limitation of AI models, their confinement to training data, by giving them access to up-to-date, domain-specific, or proprietary information. When a query is received, the system first searches a knowledge base (often using vector databases) to find relevant documents or passages, then provides these as context to the language model when generating a response. This approach reduces hallucinations, enables models to cite sources, allows for easy knowledge updates without retraining, and makes it possible to work with specialized information that wasn’t in the model’s training data.
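
A minimal sketch of that retrieve-then-generate flow, assuming hypothetical `search` and `generate` callables rather than any particular library:

```python
# Minimal RAG flow: `search` returns the most relevant passages (e.g.
# via a vector database) and `generate` stands in for a language model
# call. Both are illustrative stand-ins, not a real API.

def answer_with_rag(query, search, generate, k=3):
    passages = search(query, k=k)      # retrieval step
    context = "\n\n".join(passages)
    prompt = (
        "Answer using only the context below, and cite the passage "
        "you used. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)            # generation step, now grounded
```

Note that the prompt explicitly instructs the model to stay within the retrieved context, which is where the reduction in hallucinations and the ability to cite sources come from.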

MCP (Model Context Protocol)
A standardized communication protocol that enables AI models to securely access and interact with external data sources, tools, and services in a consistent way.

Model Context Protocol provides a unified framework for AI applications to connect with various external resources such as databases, APIs, file systems, and business tools without requiring custom integration code for each connection. It defines how models can discover available resources, request access to data, execute actions through tools, and maintain security boundaries. This standardization is crucial for building agentic AI systems that need to interact with enterprise systems, as it ensures consistent behavior, proper authentication, and controlled access to sensitive resources across different platforms and services.
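
MCP is built on JSON-RPC 2.0, so a client-to-server exchange is just structured JSON messages. The sketch below shows the general shape of a tool listing and a tool call; method names like tools/list and tools/call follow the protocol's conventions, but treat the exact payloads, and the query_database tool, as illustrative rather than normative.

```python
# Sketch of the JSON-RPC 2.0 framing MCP builds on. Payload shapes are
# illustrative; consult the MCP spec for the authoritative schema.

import json

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",           # ask the server what tools it offers
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",           # invoke one tool by name
    "params": {
        "name": "query_database",     # hypothetical tool exposed by a server
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

print(json.dumps(call_tool, indent=2))
```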

MoE (Mixture of Experts)
A neural network architecture that uses multiple specialized sub-models (experts) with a routing mechanism that selectively activates only the most relevant experts for each input.

Mixture of Experts models achieve efficiency by dividing a large model into smaller, specialized components where each expert focuses on different types of patterns or domains. A gating network learns to route each input to the most appropriate experts, activating only a subset of the total parameters for any given task. This approach allows for much larger total model capacity while keeping computational costs manageable, since only a fraction of the model processes each input. The architecture is particularly valuable for handling diverse tasks or domains within a single model, as different experts can specialize in areas like mathematics, coding, creative writing, or specific languages.
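
A toy forward pass makes the routing idea concrete. In this numpy sketch the gate scores every expert, only the top-k run, and their outputs are blended by renormalized gate weights; the random matrices stand in for parameters a real model would learn jointly with the gate.

```python
# Toy top-k expert routing: only k of the n experts do any work per input.

import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, k = 4, 8, 8, 2

gate_w = rng.normal(size=(d_in, n_experts))                # gating network
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]

def moe_forward(x):
    logits = x @ gate_w
    top = np.argsort(logits)[-k:]                          # pick top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen k
    # Only the selected experts are evaluated; the rest stay idle,
    # which is the source of the computational savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.normal(size=d_in)).shape)            # (8,)
```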

ASI (Artificial Super Intelligence)
A hypothetical future form of artificial intelligence that would surpass human cognitive abilities across all domains, including creativity, problem-solving, and social intelligence.

Artificial Super Intelligence represents a theoretical milestone where AI systems would not just match but fundamentally exceed human-level intelligence in every meaningful way. Unlike narrow AI that excels at specific tasks or even general AI that matches human-level versatility, ASI would possess superior reasoning, learning speed, memory, creativity, and the ability to improve itself recursively. This concept remains firmly in the realm of speculation and future possibility, with significant debate among experts about whether ASI is achievable, what timeline might be realistic, and what safety measures would be necessary. The discussion around ASI drives important conversations about AI alignment, control mechanisms, and the long-term implications of increasingly capable AI systems.

LLM (Large Language Model)
Neural networks trained on vast amounts of text data to understand and generate human-like language, capable of tasks ranging from conversation to code generation.

Large Language Models represent a breakthrough in AI that emerged from scaling up transformer-based neural networks and training them on diverse text from the internet, books, and other sources. These models learn statistical patterns in language that enable them to generate coherent, contextually appropriate text, answer questions, summarize documents, translate languages, write code, and perform many other language-based tasks. The “large” refers to billions or even trillions of parameters that capture nuanced patterns in how language works. While LLMs can appear remarkably intelligent, they fundamentally operate by predicting likely text continuations based on patterns learned during training, rather than truly understanding meaning in the way humans do.
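
That prediction loop is simple enough to sketch. Below, a hypothetical `model` returns scores (logits) over the vocabulary and a `tokenizer` maps text to and from token ids; neither matches any specific library's API.

```python
# The core loop behind LLM text generation: repeatedly predict a
# distribution over the next token and sample from it.

import numpy as np

def generate(model, tokenizer, prompt, max_new_tokens=50, temperature=0.8):
    tokens = tokenizer.encode(prompt)              # text -> token ids
    for _ in range(max_new_tokens):
        logits = model(tokens)                     # scores for every vocab token
        probs = np.exp(logits / temperature)
        probs /= probs.sum()                       # softmax -> probabilities
        next_token = np.random.choice(len(probs), p=probs)
        tokens.append(next_token)                  # continuation becomes new context
    return tokenizer.decode(tokens)                # token ids -> text
```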