Hoag Health - AI Transparency
At Hoag, we are committed to using technology thoughtfully and responsibly to support safe, high‑quality, and compassionate care. This page explains how Hoag uses artificial intelligence (AI), including generative AI, in certain digital health services and how we protect patient trust, privacy, and clinical judgment.
How Hoag Uses Artificial Intelligence
Hoag uses AI‑enabled tools to support—not replace—clinical care. These tools are designed to help organize information, improve efficiency, and enhance patient experiences.
AI may be used to assist with:
Patient intake and symptom collection
Summarizing patient‑provided information for clinical review
Care coordination and visit routing
Drafting documentation, summaries, and after‑visit materials
Transcription and summarization of telehealth visits
AI systems do not independently diagnose medical conditions or determine treatment. All care decisions are made by licensed health care professionals.
Human Oversight and Clinical Responsibility
All clinical decisions at Hoag are made by licensed providers.
AI‑generated outputs are used only as clinical support tools. Clinicians review relevant information, apply professional judgment, and determine the appropriate diagnosis, treatment plan, and level of care.
Use of AI Systems
Hoag designs, builds, and operates AI‑enabled services for use by patients and the public, including Ask Hoag, using third‑party AI technologies within secure, HIPAA‑compliant environments.
Hoag’s activities include:
Defining the role, scope, and purpose of AI within specific clinical and administrative workflows
Writing instructions that guide how AI responds within those workflows
Implementing safety controls and response guardrails
Integrating approved reference and contextual data to support real‑time AI interactions
Monitoring performance to ensure safety, accuracy, and appropriateness
Model‑Level Training and System Configuration
Hoag does not perform model‑level training, retraining, or parameter fine‑tuning of generative AI models, meaning Hoag does not alter the underlying model architecture, weights, or learning algorithms.
Hoag does, however, design and operate AI‑enabled services by configuring how generative AI capabilities are used within specific clinical and administrative workflows. This includes writing system instructions and prompts, implementing response guardrails, establishing workflow logic, and incorporating approved reference and contextual data to guide AI outputs for defined use cases. These activities shape how AI functions within Hoag‑operated services and constitute part of Hoag’s development of those services, but they do not involve retraining or modifying the underlying AI models themselves.
Patient information is not used to retrain, update, or fine‑tune the underlying AI models.
Safety Controls and Guardrails
Hoag implements multiple layers of safeguards to ensure AI is used safely and appropriately, including:
Purpose‑limited instructions designed specifically for health care use cases
Guardrails that restrict AI outputs to supported administrative and clinical support functions
Controls that escalate interactions to licensed clinicians when clinical judgment is required
Ongoing monitoring and review to identify and address potential risks
Training Data and Datasets
(California AB 2013 Disclosure)
The AI systems used by Hoag rely on models and components developed using large datasets. The following disclosures are provided in accordance with California Civil Code § 3111.
Whether Datasets Were Purchased or Licensed
The datasets used to train the generative AI systems underlying Hoag’s AI‑enabled services include datasets purchased or licensed by the AI system providers, as well as publicly available data and data created by human trainers. Hoag itself does not purchase or license datasets for the purpose of training generative AI models.
Data Sources and Origins
Training datasets used by AI system developers may include:
Publicly available data
Licensed data sources
Data created by human reviewers or trainers
Hoag does not provide patient medical records or identifiable patient data for the purpose of training AI models.
Purpose of the Datasets
Training datasets are used to:
Enable general language understanding
Support summarization and contextual responses
Improve system safety and reliability
Dataset Size
Training datasets typically consist of large volumes of data, often including millions to billions of data points.
Types of Data
Depending on the dataset, data may include:
Textual information
Structured and unstructured language data
Artificially generated (synthetic) data
Copyrighted and Licensed Content
Datasets may include licensed, copyrighted, or public‑domain content. Use of such content is governed by applicable legal and licensing requirements.
Personal and Sensitive Information
AI systems used by Hoag are not trained using identifiable patient medical information. Training datasets used by AI system developers may include aggregate consumer information, meaning information that relates to groups or categories of consumers and is not reasonably linkable to any individual or household. Aggregate consumer information is distinct from identifiable or de‑identified individual records.
During care delivery, any patient data processed by AI is handled in accordance with HIPAA and applicable privacy laws. Patient data is used only for treatment, health care operations, and quality improvement as permitted by law.
Aggregate Consumer Information
Training datasets used by AI system developers may include aggregate consumer information, as defined under California law, which relates to groups or categories of consumers and is not reasonably linkable to any individual or household.
Data Processing and Modification
Before being used for training, datasets may undergo processing steps such as cleaning, filtering, de‑identification where appropriate, and formatting to improve accuracy, safety, and reliability.
Time Period of Data Collection
(§ 3111(a)(10))
Training datasets used to develop the generative AI models underlying Hoag’s AI‑enabled services were collected over multiple years prior to Hoag’s deployment of those services. Based on publicly available information and information provided by AI system developers, dataset collection spans several years and may be ongoing as developers continue to update and refine their systems. Hoag does not control or independently verify the specific dates of data collection.
Dates Datasets Were First Used
(§ 3111(a)(11))
Training datasets were first used during the initial development of the underlying generative AI models by their developers, prior to Hoag’s use of those models within its services. Based on publicly available information, such use began several years before Hoag’s deployment of AI‑enabled services. Hoag does not determine or control the timing of dataset use during model development.
Synthetic Data
Some AI systems use synthetic data to test system behavior, improve performance, and reduce reliance on real‑world sensitive information. Synthetic data does not represent real patients.
Transparency, Choice, and Patient Support
Hoag provides notice when generative AI is used in patient‑facing digital experiences. Patients may request human support or in‑person care when appropriate. For emergencies, call 911 or go to the nearest emergency department.
Governance and Ongoing Review
Hoag regularly reviews its AI‑enabled services to ensure alignment with patient safety, clinical quality standards, privacy requirements, and evolving legal and regulatory guidance.
Learn More
If you have questions about Hoag’s use of artificial intelligence, please speak with your care team or contact Hoag Health.