Palo Alto, California, United States
Physician applying clinical reasoning to healthcare AI safety and bias detection.

Core Focus:
→ Clinical AI Auditing — reviewing healthcare algorithms for bias, fairness issues, and safety gaps using frameworks from Obermeyer 2019 and Wynants 2020
→ Literature Review & Synthesis — systematic analysis of healthcare AI deployment studies; building evidence-based audit checklists
→ Knowledge Architecture — a 2,000+ node NotebookLM knowledge graph mapping clinical decision pathways to AI failure modes
→ Framework Development — building practical audit toolkits for evaluating AI systems in clinical settings

What I actually do:
• Review healthcare AI papers and translate findings into actionable safety frameworks
• Analyze algorithmic bias patterns using publicly available datasets and published methodologies
• Build structured knowledge bases connecting clinical reasoning to AI evaluation criteria
• Write about healthcare AI safety from a clinical perspective — every claim backed by a paper

Tech Stack: JavaScript, Python, Playwright, SQL, NotebookLM
Knowledge Areas: FDA SaMD guidance, HIPAA compliance basics, NIST AI RMF, Obermeyer methodology

Based in Palo Alto — bringing a physician's perspective to AI safety conversations.
Independent research phase that laid the foundation for Palo Alto Health Intelligence's core methodology.

Research Areas:
→ LLM Structural Analysis for Clinical Contexts — systematic evaluation of GPT-3.5/4 and open-source models (LLaMA, Mistral) on clinical reasoning tasks. Documented failure modes: confidence collapse on rare diagnoses, hallucination patterns in drug-interaction queries, and demographic bias in triage recommendations
→ Clinical Knowledge Architecture (NotebookLM) — built a 2,000+ node knowledge graph mapping clinical decision pathways to computational equivalents. Each node links diagnostic reasoning patterns to evaluable AI system behaviors. This graph became the structural backbone of the Clinical AI Audit Framework
→ Healthcare Data Pipeline Research — analyzed FHIR/HL7 integration patterns across 15+ EHR vendor architectures. Identified the 4-6 handoff layers between model inference and clinical decision points that became the central thesis of the consulting practice
→ Clinical Pattern Recognition → AI Evaluation Translation — mapped how physicians detect silent deterioration in patients to how engineers should detect silent degradation in deployed ML models. Published the methodology as an internal framework

Key Outputs:
• Clinical AI Audit Framework v0.1 — the precursor to the current consulting deliverable
• 2,000+ node NotebookLM knowledge graph (clinical pathways ↔ AI failure modes)
• Systematic review of 50+ healthcare AI deployment case studies (including an Obermeyer 2019 replication analysis)
• FHIR integration pattern library for inference pipeline design

This period answered one question: can a physician's clinical pattern recognition be systematically translated into AI infrastructure evaluation methodology? The answer built a company.
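The Obermeyer 2019 methodology referenced above rests on one comparison: at equal model risk scores, do patient groups differ in actual health need? (The original study found the score proxied healthcare cost rather than illness.) A minimal sketch of that check, with hypothetical field names and toy data, not the framework's actual implementation:

```python
# Obermeyer-style label-choice bias check (illustrative sketch).
# At a fixed risk-score threshold, compare measured health need
# (e.g., chronic-condition count) across demographic groups.
def need_gap_at_threshold(records, threshold):
    """records: dicts with 'group', 'risk_score', 'chronic_conditions'.
    Returns mean chronic-condition count per group among patients the
    model flags at or above `threshold`. Equal scores should imply
    equal need; a persistent gap suggests the score tracks a biased
    proxy label (such as cost) rather than health."""
    flagged = [r for r in records if r["risk_score"] >= threshold]
    gaps = {}
    for g in {r["group"] for r in flagged}:
        vals = [r["chronic_conditions"] for r in flagged if r["group"] == g]
        gaps[g] = sum(vals) / len(vals)
    return gaps
```

A real audit would stratify across many thresholds and test the gap statistically; the point of the sketch is only the comparison itself.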
Specialized physician focusing on the intersection of clinical medicine and artificial intelligence. Primary work involves auditing AI-generated medical content to ensure clinical accuracy, detect logical inconsistencies, and prevent medical "hallucinations" in machine learning outputs.

Core Responsibilities:
- Reviewing and validating medical AI outputs against evidence-based standards (FDA/NOM compliance)
- Detecting clinical inconsistencies and logical fallacies in LLM-generated healthcare content
- Collaborating with AI development teams to improve algorithm accuracy and clinical relevance
- Conducting medical data analysis using Python (Pandas) for healthcare applications
- Implementing RLHF (Reinforcement Learning from Human Feedback) frameworks in medical contexts
- Providing clinical expertise for healthcare technology design and implementation
- Medical coding consultation and validation (ICD-10, SNOMED)
- Chain-of-Thought (CoT) prompting optimization for medical AI systems

Key Projects & Achievements:
- Developed systematic methodologies for identifying medical inaccuracies in AI-generated clinical content
- Applied forensic logic analysis to complex medical-legal cases requiring technical validation
- Created workflows integrating clinical expertise with data-driven decision making
- Contributed to improving healthcare AI reliability through rigorous evidence-based auditing

Approach: combining traditional medical training with modern data analytics and AI literacy. Focused on ensuring that emerging healthcare technologies maintain clinical integrity while leveraging the benefits of machine learning and automation.

Technical Skills
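One way the "detecting clinical inconsistencies in LLM-generated content" work can be made concrete is as automated rule checks layered under physician review. A minimal sketch of one such rule, flagging a generated plan that recommends a drug listed among the patient's allergies (illustrative function and matching logic, not the actual audit tooling):

```python
# One automated consistency check from a clinical audit checklist
# (illustrative sketch): does an AI-generated plan recommend a drug
# the record lists as an allergy?
def flag_allergy_conflicts(generated_plan, allergies):
    """Return the allergy-list drugs mentioned in the generated plan.
    Uses a case-insensitive substring match; production tooling would
    need proper drug-name normalization (e.g., RxNorm) and would still
    route hits to a physician rather than auto-reject."""
    text = generated_plan.lower()
    return [drug for drug in allergies if drug.lower() in text]
```

Rules like this catch only the mechanically checkable errors; the profile's broader point is that the remaining inconsistencies require clinical judgment.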