
Foundation Models

Entity Type: Glossary ID: foundation-models

Definition: Large-scale AI models trained on broad, diverse datasets that serve as a foundation for a wide range of downstream applications and tasks. These models, typically based on transformer architectures, are pre-trained using self-supervised learning on vast amounts of text, images, or multimodal data. Foundation models can be adapted through fine-tuning, prompting, or other techniques to perform specific tasks, making them highly versatile and cost-effective for developing AI applications.
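The pre-train-once, adapt-many pattern described above can be sketched in miniature. Everything here is illustrative (a toy feature extractor standing in for a frozen pre-trained backbone, plus two cheap task-specific heads); it is not a real model API, only a shape-of-the-idea sketch.

```python
# Toy sketch of the foundation-model pattern: one expensively
# "pre-trained" backbone reused by many cheap task adapters.
# All names are hypothetical, chosen for illustration.

def pretrained_features(text: str) -> dict:
    """Stand-in for a frozen pre-trained backbone: maps raw input
    to a generic representation shared by every downstream task."""
    words = text.lower().split()
    return {
        "length": len(words),
        "exclaims": text.count("!"),
        "question": text.count("?"),
    }

def sentiment_head(features: dict) -> str:
    """Cheap task-specific adapter built on the shared features."""
    return "excited" if features["exclaims"] > 0 else "neutral"

def intent_head(features: dict) -> str:
    """A second adapter: same backbone, different downstream task."""
    return "question" if features["question"] > 0 else "statement"

feats = pretrained_features("Is this a foundation model?")
print(sentiment_head(feats))  # neutral
print(intent_head(feats))     # question
```

The point of the sketch is the cost structure: the backbone is computed (in practice, trained) once, while each new task adds only a small head, which is why adaptation via fine-tuning or prompting is cheap relative to pre-training.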

Related Terms:
- pre-training
- fine-tuning
- transfer-learning
- large-language-models
- self-supervised-learning

Source URLs:
- https://arxiv.org/abs/2108.07258
- https://crfm.stanford.edu/assets/report.pdf
- https://hai.stanford.edu/sites/default/files/2021-08/HAI_Foundation_Models_Report.pdf

Tags:
- pre-training
- transfer-learning
- large-models
- foundation

Status: active

Version: 1.0.0

Created At: 2025-09-10

Last Updated: 2025-09-10