Glossary

Quick definitions you can share with your team. For how-to context, see our guides or insights.

Large language model (LLM)

Software trained on huge amounts of text so it can guess the next word in a sentence. That is why it can write emails, summarize docs, or answer questions—it is pattern matching, not “thinking.”

In buying conversations, “which LLM?” matters less than workflow fit, data boundaries, and how you will review output.

Strength: speed on drafts and first passes. Weakness: it can sound sure when it is wrong—so you still need owners and checks.

Prompt

The instruction or question you give the tool. Clear prompts beat clever ones: say the audience, the format, and what to avoid.

Good prompts often include role (“you are helping a support agent”), constraints (“under 200 words”), and examples (“here are two on-brand replies”).

Save vetted prompts your team reuses; treat them like code that needs version control.
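As a sketch, a vetted prompt can be as simple as a template with named slots. The field names below are illustrative, not any vendor's API; the point is that the template itself lives in version control, not in someone's head:

```python
# Minimal reusable prompt template -- field names are illustrative.
PROMPT_TEMPLATE = (
    "You are helping a support agent.\n"       # role
    "Audience: {audience}\n"
    "Format: {reply_format}\n"
    "Keep it under {word_limit} words.\n"      # constraint
    "Avoid: {avoid}\n"
    "Example replies:\n{examples}\n"           # on-brand examples
    "Task: {task}"
)

def build_prompt(audience, reply_format, word_limit, avoid, examples, task):
    """Fill the vetted template; the template string is what gets reviewed."""
    return PROMPT_TEMPLATE.format(
        audience=audience,
        reply_format=reply_format,
        word_limit=word_limit,
        avoid=avoid,
        examples="\n".join(examples),
        task=task,
    )
```

Changing the template then becomes a reviewable edit, just like a code change.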

Hallucination

When the model makes up facts, citations, or details. Always check anything that touches money, health, or legal matters.

Mitigations: retrieval over trusted documents, human review, and instructing the model never to invent sources.

Train staff to expect this failure mode—it reduces blind trust on day one.

Fine-tuning

Extra training on your own examples so the model fits your tone or task better. Useful, but not a substitute for good data and review.

Fine-tuning does not fix broken workflows or missing policies. It tunes style and task fit on top of a base model.

Budget for maintenance: as your products and messaging change, your examples need refreshing too.

RAG (retrieval-augmented generation)

The model looks up your documents first, then answers from that material. It reduces wild guesses when your knowledge base is accurate.

Quality depends on search, chunking, and how often documents are updated—bad docs mean confident wrong answers.

Common in internal support and policy Q&A where answers should trace back to a source page.
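The retrieve-then-answer flow above can be sketched in a few lines. Here `search_docs` and `call_model` stand in for your search index and your vendor's LLM API; both are assumptions, not real library calls:

```python
# Retrieval-augmented generation, as a sketch. search_docs() and call_model()
# are placeholders for your own search index and model API.
def answer_with_rag(question, search_docs, call_model, top_k=3):
    passages = search_docs(question)[:top_k]       # retrieval step first
    if not passages:
        return "No source found -- escalate to a human."
    context = "\n\n".join(p["text"] for p in passages)
    prompt = (
        "Answer using ONLY the sources below, and cite the source page.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)                      # answer grounded in sources
```

Note the early exit: if retrieval finds nothing, the model never gets a chance to guess, which is where traceability comes from.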

Agent

A setup where the software can take several steps on its own—search, call tools, or run code. Powerful, but needs guardrails so it does not wander.

Agents need explicit allowlists: which APIs, which data stores, and when to stop and ask a human.

Start with narrow tasks (e.g., “draft an email from this template”) before multi-step autonomy.
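A guardrail can be this literal. The tool names and step limit below are made-up examples of an allowlist check an agent framework might run before every tool call:

```python
# Explicit allowlist for agent tool calls -- tool names are illustrative.
ALLOWED_TOOLS = {"search_kb", "draft_email"}   # deliberately NOT "send_email"
MAX_STEPS = 5                                  # stop and ask a human after this

def may_call(step_count, tool_name):
    """Return True only if the tool is allowlisted and the step budget remains."""
    if step_count >= MAX_STEPS:
        return False                           # too many autonomous steps
    return tool_name in ALLOWED_TOOLS
```

Anything not on the list, or anything past the step budget, routes back to a person by default.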

Inference

The moment the model produces an answer from your prompt. Vendors often charge per token (chunks of words) for inference.

Costs rise with long prompts, long answers, and high traffic—forecast usage from real tickets or docs, not demos.

Caching and smaller models can help when the task is repetitive.
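A back-of-envelope forecast only needs four numbers: monthly request volume, typical prompt length, typical answer length, and your vendor's per-token rates. The prices below are placeholders, not any vendor's actual pricing:

```python
# Back-of-envelope inference cost forecast. The per-1K-token prices are
# placeholders -- substitute your vendor's actual input and output rates.
def monthly_cost(requests_per_month, prompt_tokens, answer_tokens,
                 price_in_per_1k=0.003, price_out_per_1k=0.006):
    per_request = (
        (prompt_tokens / 1000) * price_in_per_1k     # input tokens
        + (answer_tokens / 1000) * price_out_per_1k  # output tokens
    )
    return requests_per_month * per_request

# e.g. 10,000 tickets a month, ~1,500 prompt tokens and ~300 answer tokens
# each comes to roughly $63/month at these placeholder rates.
```

Run it with token counts measured from real tickets, not demo transcripts, and the estimate gets much less surprising.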

Token

A small piece of text the model counts. Rough rule: 100 tokens ≈ 75 English words (about three-quarters of a word per token), depending on the tool.

Pricing and rate limits are often expressed in tokens; learn your vendor’s tokenizer basics so bills are not a surprise.
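For rough planning, the word-to-token rule of thumb fits in one function. This is a heuristic only; real counts vary by tokenizer, so use your vendor's own tokenizer for anything that touches billing:

```python
def rough_tokens(text):
    """Estimate token count from word count.

    Rule of thumb: about 0.75 English words per token,
    so tokens ~= words / 0.75. A heuristic, not a billing tool.
    """
    return round(len(text.split()) / 0.75)
```

So a 100-word paragraph estimates to about 133 tokens.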

PII

Personally identifiable information—names, emails, IDs, anything that points to a real person. Treat it carefully in any cloud tool.

Policies should cover what can be pasted where, and what to do when someone makes a mistake.

Business tiers and data processing agreements (DPAs) exist because consumer products are not always appropriate for regulated data.

SSO (single sign-on)

One company login (often through Google or Microsoft) that controls who can access a vendor. Standard for serious business tools.

SSO plus role mapping reduces orphaned accounts when people leave.

Audit log

A record of who did what and when. If you cannot get logs, you cannot investigate mistakes or misuse.

Ask about retention length and export format before you need them in a hurry.

For customer-facing AI, logs help prove what the system said when disputes arise.

Human in the loop

A person checks or approves the output before it ships. Common in support, compliance-heavy work, and anything customer-facing.

Design queues so review is fast—otherwise staff will route around the tool.

Measure reviewer time; if humans become the bottleneck, fix prompts, retrieval, or scope before buying more seats.

Need help applying this to your workflows? Contact Eccordia.