Glossary
AI, Learning Experience Design & Knowledge Management
This glossary is a practical reference for professionals working in or around learning experience design, knowledge management, and AI. Whether you are an instructional designer exploring AI tools, a product manager building training systems, or a leader evaluating how AI can improve your team’s performance, these definitions give you a shared vocabulary for the concepts that matter most.
Terms are organized into three categories. Within each category, terms appear in alphabetical order. Each definition includes a category label and, where relevant, cross-references to related terms in the glossary. These connections help you see how concepts relate to each other across disciplines.
Learning Experience Design (LXD)
The principles, frameworks, and practices of designing learning experiences that respect how people think, feel, and grow.
Accessibility (WCAG)
(LXD)
The practice of designing learning experiences that all people can use, including those with disabilities. The Web Content Accessibility Guidelines (WCAG) provide the international standard, with Level AA compliance as the most common target. In practice, this means ensuring captions on video, sufficient color contrast, keyboard navigation, and screen reader compatibility across all training materials. See also: Universal Design for Learning (UDL)
ADDIE Model
(LXD)
A five-phase instructional design framework: Analysis, Design, Development, Implementation, and Evaluation. Each phase informs the next, creating a structured path from identifying learning needs to measuring outcomes. While sometimes criticized as linear, ADDIE remains the most widely referenced framework in corporate and academic learning design. See also: SAM Model
Backward Design
(LXD)
An instructional design approach that starts with defining desired outcomes and assessments before planning learning activities. Rather than asking what content to cover, backward design asks what learners should be able to do at the end, then works in reverse to build the path. This approach prevents scope creep and keeps training focused on measurable results. See also: Learning Objectives
Bloom's Taxonomy
(LXD)
A classification system for cognitive learning objectives organized into six levels: Remember, Understand, Apply, Analyze, Evaluate, and Create. Each level represents increasing complexity of thought. Trainers use Bloom's Taxonomy to write learning objectives that target the right depth of understanding for a given audience and task. See also: Learning Objectives
Cognitive Load Theory
(LXD)
A framework explaining that working memory has limited capacity, and instructional design should manage how much mental effort learners expend. Three types of cognitive load exist: intrinsic (complexity of the content), extraneous (caused by poor design), and germane (productive effort toward learning). Reducing extraneous load through clear layouts, chunked content, and progressive disclosure is a core principle of empathy-driven design. See also: Scaffolding; Microlearning
Formative Assessment
(LXD)
Low-stakes evaluation conducted during the learning process to check understanding and guide instruction. Examples include knowledge checks, polling questions, practice exercises, and peer feedback. Formative assessments help both learners and facilitators identify gaps before they become problems, making them essential to confidence-building in onboarding programs. See also: Summative Assessment
Job Aid
(LXD)
A performance support tool designed to be used at the point of need rather than memorized. Job aids take many forms: checklists, quick reference cards, flowcharts, and decision trees. They reduce reliance on memory and are especially valuable during the first weeks of a new role, when cognitive load is highest and confidence is still developing. See also: Knowledge Base
Kirkpatrick Model
(LXD)
A four-level framework for evaluating training effectiveness: Reaction (did learners find it valuable?), Learning (did knowledge or skills change?), Behavior (are learners applying what they learned?), and Results (did business outcomes improve?). Most organizations measure Levels 1 and 2 consistently but struggle to connect training to Level 4 business results without intentional measurement design.
Learner Persona
(LXD)
A research-based profile representing a segment of your target learners, including their goals, prior knowledge, learning preferences, constraints, and frustrations. Learner personas are built through interviews, surveys, and observation rather than assumptions. In onboarding design, personas help ensure training addresses the real experience of new hires rather than what leadership imagines they need.
Learning Experience Design (LXD)
(LXD)
The practice of creating learning experiences that are human-centered, goal-oriented, and designed around how people think, feel, and behave as they learn. LXD draws from instructional design, user experience design, and behavioral science to build training that works because it respects the learner's experience. This is the foundation of Modern Learning Lab's approach. See also: Cognitive Load Theory; Universal Design for Learning (UDL)
Learning Management System (LMS)
(LXD)
Software that hosts, delivers, tracks, and manages learning content and learner progress. Common LMS platforms include LearnDash, Docebo, TalentLMS, and Cornerstone. An LMS is the infrastructure layer that makes self-paced and blended learning programs possible at scale, handling everything from course access and sequencing to quiz scores and completion certificates.
Learning Objectives
(LXD)
Clear, measurable statements that describe what learners will be able to do after completing a training experience. Well-written objectives use action verbs tied to observable outcomes (e.g., "identify," "calculate," "evaluate") rather than vague terms like "understand" or "learn." Objectives serve as the contract between designer and learner: this is what you will gain from this experience. See also: Backward Design; Bloom's Taxonomy
Microlearning
(LXD)
Short, focused learning experiences typically lasting 3 to 10 minutes, designed to teach a single concept or skill. Microlearning works best for reinforcement, just-in-time support, and spaced practice rather than complex skill development. In onboarding, microlearning modules help new hires build confidence incrementally without the overwhelm of marathon training sessions. See also: Spaced Repetition
SAM Model
(LXD)
The Successive Approximation Model, an agile alternative to ADDIE that uses iterative design cycles. SAM emphasizes rapid prototyping and frequent stakeholder feedback rather than completing each phase before starting the next. This approach is well-suited to projects where requirements evolve quickly or where early learner testing reveals design assumptions that need correction. See also: ADDIE Model
Scaffolding
(LXD)
Temporary support structures that help learners accomplish tasks they cannot yet do independently. Scaffolding is gradually removed as competence grows. Examples include worked examples, templates, checklists, guided practice, and coaching prompts. The key principle is that scaffolding should be intentionally designed to fade, building independence rather than dependence. See also: Cognitive Load Theory
Spaced Repetition
(LXD)
A learning strategy that spaces practice and review at increasing intervals over time, leveraging how memory consolidation works. Instead of cramming content into one session, spaced repetition schedules review at optimal intervals to strengthen long-term retention. This technique is especially effective in onboarding programs where new hires must retain process knowledge weeks after initial training. See also: Microlearning
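A minimal sketch of the idea, assuming a simple doubling schedule. The function name, starting interval, and doubling factor are all illustrative; real systems (e.g., SM-2-style algorithms) adjust intervals based on recall performance.

```python
from datetime import date, timedelta

def review_schedule(start: date, first_interval_days: int = 1,
                    reviews: int = 5, factor: int = 2) -> list[date]:
    """Return review dates whose spacing doubles after each review.

    The doubling factor is illustrative, not prescriptive; production
    spaced-repetition systems adapt intervals to learner performance.
    """
    dates = []
    interval = first_interval_days
    current = start
    for _ in range(reviews):
        current = current + timedelta(days=interval)
        dates.append(current)
        interval *= factor
    return dates

# Reviews land 1, 3, 7, 15, and 31 days after the start date.
schedule = review_schedule(date(2024, 1, 1))
```

The point of the sketch is the shape of the curve: early reviews come quickly while the memory is fragile, and later reviews stretch out as retention strengthens.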
Storyboarding
(LXD)
The process of visually mapping out a learning experience screen by screen or scene by scene before full development begins. Storyboards typically include content, narration scripts, visual direction, interactions, and navigation notes. They serve as the blueprint that aligns stakeholders, subject matter experts, and developers before production costs escalate.
Summative Assessment
(LXD)
Evaluation conducted at the end of a learning experience to measure whether learners achieved the defined objectives. Examples include final exams, certification tests, performance demonstrations, and scored simulations. Summative assessments determine readiness, making them critical gates in programs where competence must be verified before real-world application. See also: Formative Assessment
Train-the-Trainer
(LXD)
A program design model where experienced facilitators are trained to deliver content to others, enabling organizations to scale training without relying on a single instructor. Effective train-the-trainer programs include not just content knowledge but facilitation skills, classroom management, and the ability to adapt to learner needs in real time. The model depends on high-quality, standardized materials and clear facilitator guides. See also: Job Aid
Universal Design for Learning (UDL)
(LXD)
A framework for designing learning experiences that provide multiple means of engagement, representation, and action and expression to accommodate the widest range of learners from the start. UDL shifts the question from "how do we accommodate this learner?" to "how do we design so accommodation is rarely needed?" In practice, UDL means offering content in multiple formats, providing flexible assessment options, and building choice into the learner experience. See also: Accessibility (WCAG)
Knowledge Management (KM)
How organizations capture, organize, share, and maintain collective knowledge to reduce risk and improve performance.
Communities of Practice
(KM)
Groups of people who share a common interest or profession and learn from each other through regular interaction. Unlike formal teams, communities of practice form around shared expertise and voluntary participation. They serve as living knowledge networks where tacit knowledge surfaces through conversation, mentoring, and collaborative problem-solving. See also: Tacit Knowledge
Content Governance
(KM)
The policies, roles, and processes that determine how knowledge assets are created, reviewed, updated, and retired. Without content governance, documentation drifts out of date, conflicting versions circulate, and no one is accountable for accuracy. A governance model assigns ownership, defines review cycles, and establishes standards for quality and consistency. See also: Documentation Lifecycle; Single Source of Truth
Desk Reference
(KM)
A consolidated, practical guide designed to be used at the point of work, combining essential procedures, definitions, and troubleshooting steps in one accessible document. Unlike scattered help articles, a desk reference serves as a comprehensive companion for a specific role, product, or process. The format works especially well for complex products where users need quick, authoritative answers. See also: Job Aid; Single Source of Truth
Documentation Lifecycle
(KM)
The stages a knowledge asset moves through from creation to retirement: drafting, review, publication, maintenance, and archival or deletion. Managing this lifecycle intentionally prevents outdated documentation from misleading users and ensures resources remain trustworthy. Each stage should have a defined owner and timeline. See also: Content Governance
Explicit Knowledge
(KM)
Knowledge that has been captured in a tangible form such as documents, procedures, databases, or training materials. Explicit knowledge can be easily shared, stored, and transferred between people. The challenge is not capturing it but keeping it current, organized, and accessible to the people who need it most. See also: Tacit Knowledge.
Information Architecture
(KM)
The structural design of how information is organized, labeled, and connected within a system so that users can find what they need. In knowledge management, information architecture determines how articles are categorized, how search functions, and how users navigate between related topics. Poor information architecture is one of the primary reasons knowledge bases go unused. See also: Taxonomy.
Institutional Memory
(KM)
The accumulated knowledge, experience, and context that an organization holds collectively through its people, processes, and documentation. When key employees leave without transferring their knowledge, institutional memory erodes, forcing teams to relearn lessons, repeat mistakes, and lose the contextual understanding that drives good decision-making. See also: Tribal Knowledge; Knowledge Transfer.
Knowledge Audit
(KM)
A systematic assessment of what knowledge exists within an organization, where it lives, who holds it, and where gaps or risks exist. A knowledge audit typically involves interviews, document review, and process mapping. The output informs priorities for knowledge capture, training development, and system improvements. See also: Knowledge Silo; Tribal Knowledge.
Knowledge Base
(KM)
A centralized, searchable collection of articles, guides, FAQs, and reference materials designed to help users find answers independently. Modern knowledge bases increasingly incorporate AI-powered search and recommendation to surface relevant content faster. An effective knowledge base reduces support tickets, accelerates onboarding, and preserves organizational knowledge beyond individual contributors. See also: AI-Powered Knowledge Base.
Knowledge Management
(KM)
The discipline of capturing, organizing, sharing, and maintaining an organization’s collective knowledge to improve performance, reduce redundancy, and preserve expertise. Knowledge management spans technology (tools and platforms), process (governance and workflows), and culture (willingness to share and document). When done well, it transforms scattered tribal knowledge into a reliable, scalable asset. See also: Explicit Knowledge; Tacit Knowledge.
Knowledge Silo
(KM)
A condition where valuable knowledge is isolated within a specific team, department, or individual and is not accessible to the broader organization. Silos form naturally as teams specialize, but they create risk: duplicated effort, inconsistent information, slower decision-making, and vulnerability when key people leave. Breaking silos requires both structural changes (shared platforms, cross-functional documentation) and cultural shifts (rewarding knowledge sharing over knowledge hoarding). See also: Tribal Knowledge; Communities of Practice.
Knowledge Transfer
(KM)
The process of moving knowledge from one person, team, or system to another. Effective knowledge transfer goes beyond handing over documents. It involves structured conversations, shadowing, mentoring, and guided practice that help the recipient understand not just what to do but why decisions were made and what context matters. In onboarding, knowledge transfer is the bridge between what the organization knows and what the new hire needs to learn. See also: Train-the-Trainer.
Runbook
(KM)
A step-by-step operational guide for completing a specific process, typically used in technical and operational environments. Runbooks document the exact sequence of actions, decision points, and escalation paths needed to handle routine tasks or respond to incidents. They are designed for execution, not learning, making them distinct from training materials but complementary to onboarding. See also: Standard Operating Procedure (SOP).
Single Source of Truth
(KM)
A principle and practice where one authoritative location holds the definitive, current version of a piece of information. When multiple versions of the same document or data exist across different platforms, teams waste time reconciling conflicts and risk acting on outdated information. Establishing a single source of truth is a foundational step in knowledge management that reduces confusion and builds organizational confidence in shared resources. See also: Content Governance.
Standard Operating Procedure (SOP)
(KM)
A documented set of instructions for completing a recurring task or process consistently. SOPs define what needs to happen, in what order, and to what standard. They serve as the baseline for training, quality assurance, and compliance. The most effective SOPs are concise, visually clear, and written from the perspective of the person performing the work. See also: Runbook.
Tacit Knowledge
(KM)
Knowledge gained through personal experience that is difficult to articulate, document, or transfer in written form. Tacit knowledge includes intuition, judgment, pattern recognition, and the contextual understanding that experienced professionals apply instinctively. Because it resists easy capture, tacit knowledge is often the most valuable and most vulnerable type of organizational knowledge. Surfacing it requires intentional interview techniques, observation, and storytelling. See also: Explicit Knowledge; Tribal Knowledge.
Taxonomy
(KM)
A hierarchical classification system that organizes knowledge into categories, subcategories, and labels. In knowledge management, taxonomy determines how content is tagged, sorted, and discovered. A well-designed taxonomy reflects how users think about and search for information, not how the organization is structured internally. See also: Information Architecture.
Tribal Knowledge
(KM)
Critical operational knowledge that exists only in the minds of specific individuals and has never been documented or formalized. Tribal knowledge is a risk factor: when those individuals leave, go on vacation, or get promoted, the knowledge leaves with them. Identifying and capturing tribal knowledge is one of the highest-impact activities in any knowledge management initiative. See also: Tacit Knowledge; Knowledge Audit.
Artificial Intelligence (AI)
The concepts, tools, and workflows transforming how people learn, create content, and manage knowledge with artificial intelligence.
Agentic Workflow
(AI)
An AI-driven process where the model autonomously plans, executes, and iterates on multi-step tasks with minimal human direction between steps. Unlike simple prompt-response interactions, agentic workflows involve the AI making decisions about which tools to use, what information to gather, and how to sequence actions. This represents the leading edge of how AI integrates into knowledge work. See also: AI Agent; AI Workflow Automation
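As a toy illustration of this plan-act-iterate loop, here is a sketch in which a stubbed decision function stands in for the model. The tool names, the decision rules, and the notes structure are all invented for illustration; in a real system an LLM would choose the next action.

```python
# Stub standing in for the model's decision step: look at what has been
# gathered so far and pick the next action. Rules are invented for the demo.
def decide_next_action(goal, notes):
    if "facts" not in notes:
        return ("search", goal)
    if "draft" not in notes:
        return ("write_draft", notes["facts"])
    return ("done", None)

# Toy tools; a real agent would call search APIs, databases, etc.
TOOLS = {
    "search": lambda q: f"facts about {q}",
    "write_draft": lambda facts: f"draft based on {facts}",
}

def run_agent(goal: str) -> dict:
    """Loop: decide, act, record results, repeat until done."""
    notes = {}
    while True:
        action, arg = decide_next_action(goal, notes)
        if action == "done":
            return notes
        notes["facts" if action == "search" else "draft"] = TOOLS[action](arg)

result = run_agent("onboarding checklist")
```

The essential shape is the feedback loop: each tool result flows back into the next decision, which is what separates agentic workflows from a single prompt-response exchange.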
AI Agent
(AI)
An AI system designed to take autonomous actions toward a defined goal, often using tools, APIs, and external data sources to complete tasks. AI agents go beyond generating text by making decisions, executing steps, and adapting based on results. In learning and knowledge management, AI agents can automate content updates, route support questions, or manage data integrations. See also: Agentic Workflow
AI-Assisted Content Creation
(AI)
The practice of using AI tools to draft, edit, restructure, or enhance written, visual, or multimedia content with human oversight and refinement. AI-assisted content creation accelerates production without replacing the judgment, context, and brand voice that human creators bring. In learning design, this means using AI to draft scripts, generate quiz questions, or produce first-pass instructional materials that are then reviewed and refined. See also: Human-in-the-Loop; Generative AI
AI-Enhanced Assessment
(AI)
The use of AI to create, deliver, score, or analyze assessments in learning programs. This can include generating adaptive quiz questions based on learner performance, providing instant feedback on written responses, or analyzing assessment patterns to identify common knowledge gaps. AI-enhanced assessment makes formative feedback faster and more personalized while freeing facilitators to focus on coaching. See also: Formative Assessment
AI Ethics in Learning
(AI)
The principles and practices that guide responsible use of AI in education and training contexts. Key concerns include data privacy (especially with learner data), algorithmic bias in assessment and recommendation systems, transparency about when AI is being used, and ensuring AI augments rather than replaces the human relationships that support learning. Organizations adopting AI tools for training should establish clear guidelines before deployment. See also: Human-in-the-Loop
AI Literacy
(AI)
The ability to understand what AI is, how it works at a conceptual level, what it can and cannot do, and how to use AI tools effectively and responsibly. AI literacy does not require technical expertise in machine learning. It means knowing enough to evaluate AI outputs critically, write effective prompts, recognize AI limitations, and make informed decisions about when and where AI adds value to your work. See also: Prompt Engineering
AI-Powered Knowledge Base
(AI)
A knowledge management system that uses artificial intelligence to improve how information is searched, surfaced, and maintained. AI capabilities may include semantic search (understanding intent, not just keywords), automated content tagging, suggested related articles, and conversational interfaces that let users ask questions in natural language. AI-powered knowledge bases reduce the time users spend searching and increase the likelihood they find accurate, relevant answers. See also: Knowledge Base; Retrieval-Augmented Generation (RAG)
AI Readiness
(AI)
An assessment of how prepared an organization is to adopt and benefit from AI tools and workflows. AI readiness spans several dimensions: data quality and availability, technical infrastructure, workforce skills and AI literacy, leadership support, and cultural openness to new ways of working. Organizations with low AI readiness often invest in tools before building the foundational capabilities needed to use them effectively. See also: AI Literacy
AI Workflow Automation
(AI)
The use of AI to handle repetitive, rule-based, or semi-structured tasks within a larger process, freeing humans to focus on higher-value work. Examples in learning and knowledge management include automated content formatting, intelligent document routing, scheduled knowledge base updates, and AI-generated first drafts for review. Effective AI workflow automation starts with mapping existing processes and identifying where AI reduces friction without introducing risk. See also: Agentic Workflow; AI Agent
Artificial Intelligence (AI)
(AI)
A broad field of computer science focused on building systems that can perform tasks typically requiring human intelligence, including understanding language, recognizing patterns, making decisions, and generating content. In learning and knowledge management, AI refers most often to tools that help create content, personalize learning paths, search information more effectively, and automate routine tasks. AI is a capability, not a product: it enables better tools rather than replacing the need for thoughtful design.
Chatbot
(AI)
A software application that simulates conversation with users, typically through text-based interfaces. In learning and support contexts, chatbots can answer frequently asked questions, guide users through processes, and triage requests before escalating to a human. Modern chatbots powered by large language models can handle more nuanced, open-ended questions than their rule-based predecessors. See also: AI-Powered Knowledge Base; Natural Language Processing (NLP)
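A minimal rule-based sketch of the triage pattern described above. The keywords and canned responses are invented; an LLM-backed chatbot replaces the keyword rules with language understanding but keeps the same answer-or-escalate shape.

```python
def triage_bot(message: str) -> str:
    """Answer what the rules cover; escalate everything else.

    Keyword matching is the rule-based predecessor mentioned in the
    definition; the responses here are placeholders for a demo.
    """
    text = message.lower()
    if "password" in text:
        return "You can reset your password from the account settings page."
    if "hours" in text:
        return "Support is available 9am-5pm, Monday through Friday."
    return "I'll connect you with a human agent."

print(triage_bot("How do I reset my password?"))
print(triage_bot("My order arrived damaged"))
```

The brittleness is visible immediately: "I forgot my login" misses every rule and escalates, which is exactly the gap LLM-powered chatbots close.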
Context Window
(AI)
The maximum amount of text (measured in tokens) that an AI model can process in a single interaction, including both the input prompt and the generated response. Context window size determines how much information the model can consider at once. Larger context windows allow for longer documents, more conversation history, and richer context, but they also increase processing costs. Understanding context window limits is essential for designing effective AI workflows. See also: Tokenization
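One practical consequence is that long conversations must be trimmed to fit the window, usually by dropping the oldest turns first. A sketch of that idea, assuming a crude whitespace token count (real systems use the model's own tokenizer):

```python
def fit_to_window(messages, max_tokens, count_tokens=lambda s: len(s.split())):
    """Keep the most recent messages that fit within the token budget.

    Walks the history newest-first and stops at the first message that
    would overflow, so the oldest context is what gets dropped.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["intro message one", "a follow up question", "the latest user turn"]
trimmed = fit_to_window(history, max_tokens=8)  # the oldest message is dropped
```

Production systems layer summarization or retrieval on top of simple truncation, but the budget arithmetic is the same.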
Embedding
(AI)
A numerical representation of text, images, or other data that captures semantic meaning in a format AI systems can process. Embeddings place similar concepts close together in mathematical space, enabling AI to find related content, power semantic search, and identify patterns. In knowledge management, embeddings are the technology behind AI systems that understand what you mean rather than just matching keywords. See also: Retrieval-Augmented Generation (RAG); AI-Powered Knowledge Base
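"Close together in mathematical space" is usually measured with cosine similarity. A toy illustration: the three-dimensional vectors below are invented by hand (real embedding models produce hundreds or thousands of dimensions), but the comparison works the same way.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 means same direction (similar meaning),
    near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hand-made toy "embeddings" for three terms.
refund = [0.9, 0.1, 0.0]
reimbursement = [0.85, 0.2, 0.05]
vacation = [0.0, 0.1, 0.95]

# "refund" sits much closer to "reimbursement" than to "vacation",
# even though the words share no letters a keyword search could match.
print(cosine_similarity(refund, reimbursement))
print(cosine_similarity(refund, vacation))
```

This is the mechanism that lets a knowledge base return the reimbursement policy when a user searches for "refund".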
Fine-Tuning
(AI)
The process of training a pre-existing AI model on additional, domain-specific data to improve its performance for particular tasks. Fine-tuning adapts a general-purpose model to understand specialized vocabulary, follow specific formatting conventions, or reflect organizational tone and standards. It sits between using a model as-is and building a custom model from scratch, offering a practical middle path for organizations with unique needs. See also: Training Data
Generative AI
(AI)
AI systems capable of creating new content, including text, images, audio, code, and video, based on patterns learned from training data. Generative AI tools like ChatGPT, Claude, and Midjourney have transformed content creation workflows by producing first drafts, variations, and ideas at unprecedented speed. The quality and usefulness of generative AI output depends heavily on the quality of the prompts and the human judgment applied during review. See also: Large Language Model (LLM); AI-Assisted Content Creation
Hallucination (AI)
(AI)
An instance where an AI model generates content that sounds plausible and confident but is factually incorrect, fabricated, or unsupported by its training data. Hallucinations occur because language models predict likely word sequences rather than verifying facts. Recognizing this tendency is critical for anyone using AI in learning or knowledge management, where accuracy is non-negotiable. Always verify AI-generated claims against authoritative sources. See also: Human-in-the-Loop
Human-in-the-Loop
(AI)
A workflow design where human judgment is integrated at critical points in an AI-driven process, ensuring quality, accuracy, and accountability. Rather than fully automating a task, human-in-the-loop systems use AI to draft, suggest, or accelerate while a person reviews, approves, and refines the output. This approach is essential in learning and knowledge management, where incorrect information can undermine trust and cause real harm. See also: AI Ethics in Learning
Large Language Model (LLM)
(AI)
A type of AI model trained on massive text datasets to understand and generate human language. LLMs power tools like ChatGPT, Claude, and Gemini. They work by predicting the most likely next words in a sequence, giving them the ability to draft text, answer questions, summarize documents, and follow complex instructions. LLMs are powerful but not infallible, making human oversight essential for professional use. See also: Generative AI; Hallucination (AI)
Machine Learning
(AI)
A subset of artificial intelligence where systems improve their performance on a task by learning from data rather than being explicitly programmed with rules. Machine learning powers recommendation engines, spam filters, predictive analytics, and many AI tools used in learning and business. Understanding machine learning at a conceptual level helps professionals evaluate which AI claims are credible and which are overstated. See also: Artificial Intelligence (AI); Training Data
Model Context Protocol (MCP)
(AI)
An emerging open standard for connecting AI models to external tools, data sources, and services in a structured, secure way. MCP allows AI systems to interact with applications like CRMs, databases, calendars, and file systems through a consistent interface. For knowledge management and learning teams, MCP represents a path toward AI assistants that can access and act on organizational data rather than operating in isolation. See also: AI Agent; Agentic Workflow
Multimodal AI
(AI)
AI systems that can process and generate content across multiple types of media, such as text, images, audio, and video, within a single interaction. Multimodal capabilities allow an AI to analyze a diagram, read a document, and respond with both text and generated visuals. In learning design, multimodal AI opens possibilities for creating richer, more accessible training experiences more efficiently. See also: Generative AI
Natural Language Processing (NLP)
(AI)
The branch of AI focused on enabling computers to understand, interpret, and generate human language. NLP powers search engines, translation tools, chatbots, sentiment analysis, and the conversational interfaces that make AI tools feel intuitive. In knowledge management, NLP is what allows a knowledge base to return relevant results even when the user's query does not match the exact wording of an article. See also: Large Language Model (LLM); Chatbot
Prompt Engineering
(AI)
The practice of crafting inputs (prompts) to AI models in ways that produce the most useful, accurate, and relevant outputs. Effective prompt engineering involves clear instructions, relevant context, examples of desired output, and specification of format, tone, and constraints. As AI tools become central to content creation and knowledge management, prompt engineering is becoming a core professional skill rather than a niche technical ability. See also: Prompt Library; Zero-Shot / Few-Shot Learning
Prompt Library
(AI)
A curated collection of tested, reusable prompts organized by task type, use case, or workflow. Prompt libraries save time, ensure consistency, and help teams adopt AI tools faster by removing the trial-and-error of writing prompts from scratch. For organizations, a shared prompt library functions as a knowledge management asset, capturing the best practices for how the team interacts with AI. See also: Prompt Engineering
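A minimal sketch of what a shared prompt library can look like in code: templates keyed by task, with placeholders filled at call time. The template names and wording are illustrative, not a recommended set.

```python
# Tested templates keyed by task name; placeholders are filled per use.
PROMPT_LIBRARY = {
    "summarize_article": (
        "Summarize the following article in {sentences} sentences "
        "for an audience of {audience}:\n\n{article}"
    ),
    "draft_quiz_question": (
        "Write one multiple-choice question at the '{bloom_level}' level "
        "of Bloom's Taxonomy about: {topic}"
    ),
}

def render(task: str, **fields) -> str:
    """Fill a library template with task-specific values."""
    return PROMPT_LIBRARY[task].format(**fields)

prompt = render("summarize_article", sentences=3,
                audience="new hires", article="...")
```

Storing templates centrally, rather than in individual chat histories, is what turns prompting know-how into a shared, governable asset.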
Retrieval-Augmented Generation (RAG)
(AI)
An AI architecture that combines information retrieval with text generation. When a user asks a question, the system first searches a knowledge base or document collection for relevant content, then provides that content to a language model as context for generating a response. RAG reduces hallucinations by grounding AI responses in real, verified data. It is the technology behind most AI-powered knowledge base assistants. See also: AI-Powered Knowledge Base; Embedding; Hallucination (AI)
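The retrieve-then-generate pattern can be sketched as follows, using toy keyword-overlap retrieval in place of the embedding-based search production systems use. The documents and query are invented for the demo, and the assembled prompt would be sent to a language model rather than printed.

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query.

    Toy retrieval for illustration only; real RAG systems rank by
    embedding similarity over a vector index.
    """
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Step 1: retrieve relevant content. Step 2: ground the model in it."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Password resets are handled in the account settings page.",
    "Expense reports are due on the first Friday of each month.",
]
prompt = build_rag_prompt("How do I reset my password", docs)
```

The "using only this context" instruction is the grounding step: the model answers from retrieved, verifiable material instead of its training-data memory.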
Synthetic Data
(AI)
Data generated by AI models rather than collected from real-world events or human behavior. In learning design, synthetic data can create realistic practice scenarios, test cases, and simulation inputs without exposing sensitive or private information. Synthetic data is also used to train and evaluate AI systems when real-world data is scarce, expensive, or raises privacy concerns. See also: Training Data
Temperature
(AI)
A setting that controls the randomness and creativity of an AI model's responses. Lower temperature values (e.g., 0.1-0.3) produce more predictable, focused, and consistent outputs, while higher values (e.g., 0.7-1.0) increase variety and creative risk. For professional tasks like writing SOPs or knowledge articles, lower temperatures are usually preferred. For brainstorming or creative drafts, higher temperatures can be more useful. See also: Prompt Engineering
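Under the hood, temperature rescales the model's raw scores (logits) before the next token is sampled. A minimal sketch of that mechanism with three made-up candidate scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into sampling probabilities.

    Dividing by a low temperature sharpens the distribution toward the
    top choice; a high temperature flattens it, making less likely
    tokens more probable.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
low = softmax_with_temperature(logits, 0.2)   # top choice dominates
high = softmax_with_temperature(logits, 2.0)  # probabilities flatten out
```

At temperature 0.2 the first candidate takes nearly all the probability mass; at 2.0 the three candidates are much closer, which is why high-temperature output reads as more varied.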
Tokenization
(AI)
The process by which AI models break text into smaller units called tokens for processing. A token can be a word, part of a word, or a punctuation mark. Understanding tokenization is practical: it affects how much text fits in a context window, how API costs are calculated, and why AI models sometimes struggle with tasks like counting letters or processing very long documents. See also: Context Window; Large Language Model (LLM)
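For budgeting purposes, a rough heuristic often cited for English text is about four characters per token. A sketch of that estimate (exact counts require the model's own tokenizer, e.g. the tiktoken library for OpenAI models; the function here is a deliberate approximation):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token heuristic
    for English. Real counts vary by model and tokenizer."""
    return max(1, len(text) // 4)

doc = "Tokenization splits text into units smaller than words."
# Word count and estimated token count usually differ, which is why
# context windows quoted in tokens do not translate directly to words.
print(len(doc.split()), estimate_tokens(doc))
```

By this heuristic, a 100,000-token context window holds roughly 400,000 characters of English text, noticeably fewer than 100,000 words.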
Training Data
(AI)
The dataset used to teach a machine learning model how to perform its task. The quality, diversity, and size of training data directly impact a model's capabilities and limitations. Biases in training data become biases in the model. For professionals using AI tools, understanding training data helps explain why a model excels in some areas and struggles in others, and why outputs should always be reviewed. See also: Machine Learning; Fine-Tuning
Zero-Shot / Few-Shot Learning
(AI)
Prompting techniques that influence how much guidance an AI model receives for a task. In zero-shot prompting, the model receives only instructions with no examples. In few-shot prompting, the model receives one or more examples of the desired input-output pattern. Few-shot learning typically improves output quality for structured or specialized tasks, making it a practical technique for anyone using AI tools professionally. See also: Prompt Engineering
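The difference is easiest to see by constructing the two prompt styles side by side. The classification task, labels, and example tickets below are invented for illustration:

```python
instruction = ("Classify the support ticket as 'billing', "
               "'technical', or 'other'.")
ticket = "My invoice shows a duplicate charge."

# Zero-shot: instructions only, no examples.
zero_shot = f"{instruction}\n\nTicket: {ticket}"

# Few-shot: the same instruction plus worked examples of the
# input-output pattern, which usually stabilizes the output format.
examples = [
    ("The app crashes when I upload a file.", "technical"),
    ("Can I change my plan's renewal date?", "billing"),
]
example_block = "\n".join(f"Ticket: {t}\nLabel: {label}"
                          for t, label in examples)
few_shot = f"{instruction}\n\n{example_block}\n\nTicket: {ticket}\nLabel:"
```

Ending the few-shot prompt with a dangling "Label:" nudges the model to complete the established pattern with a bare label rather than a full sentence.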
