🗞️ Documented by AI, Judged by You (?)
High hallucination rates in LLMs yet 90% AI accuracy in home health (what do I believe?), SNF automation without nurses (for now), and why student ethics don’t equal AI readiness.

Team Huddle
Every day, a new article or study is released that either praises AI’s accuracy or tears it down. Two of the News Nurses Need to Know articles this edition do just that! Times like these make it more important than ever to put your critical thinking skills to the test. Ask yourself:
Is this source trustworthy? Biased?
What’s the study design? Is it designed well?
Is the data statistically significant?

Never take a study at face value and, for the love of Florence Nightingale, don’t read just the title! 😩
News Nurses Need to Know
Nurse Informaticist Helen Lu covers UGM25 In depth + Cerner announces AI tools
Details: Last newsletter, I gave you the initial rundown of Epic’s UGM25 announcements about 3 days out of the gate. Since then, more information has come out, and it does pertain to us Nurses (yay!). Ambient scribes and AI wound assessments were just the beginning. Epic intends to roll out a mobile workstation casting experience to seamlessly go from phone to work desk charting, AI quality metric extractions without manual abstractions, intelligent patient assignments, natural language chart queries, AI task clusterings on brain, virtual nursing support, and more!
On the flipside, Oracle’s Cerner also announced AI tools with similar product offerings: an embedded clinical AI agent, automated documentation and coding, ambient listening, prior auth automation, prescription and lab management, care gap identification, and more. From what I can gauge, there aren’t many features for bedside nurses to interact with yet. The features will be available for ambulatory providers and are pending release after regulatory approvals. Acute care GTM is planned for 2026.
Why it matters: Building for clinicians who can bill for their work is a no-brainer for founders, but it’s clear that nursing-specific problems are slowly but surely also being addressed.
LLMs are highly vulnerable to adversarial hallucination attacks during clinical decision support.
Details: A large-scale study published in Nature Communications Medicine tested 6 LLMs with 300 physician-validated clinical scenarios, each containing one fabricated detail (fake lab tests, invented syndromes, or fictitious physical signs). The research team found hallucination rates ranging from 50-82% across models when LLMs confidently elaborated on fabricated information rather than expressing uncertainty.
GPT-4o performed best but still hallucinated 53% of the time under default settings. Distilled-DeepSeek performed worst at over 80%. When researchers applied mitigation prompts designed to reduce hallucinations, overall rates dropped from 66% to 44% - still essentially a coin flip. Temperature adjustments (setting to zero for more deterministic outputs) offered no significant improvement.
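For readers curious what a "mitigation prompt" and a temperature setting actually look like in practice, here's a minimal sketch. The prompt wording, function name, and scenario below are my own illustration, not the study's actual protocol; only the design ideas (a system instruction that permits the model to say "I cannot verify this," a fabricated detail embedded in the scenario, and temperature set to zero) come from the article.

```python
# Hypothetical sketch of a hallucination-mitigation prompt setup.
# The wording is illustrative, not the researchers' actual text.

MITIGATION_SYSTEM_PROMPT = (
    "You are a clinical decision-support assistant. Before answering, "
    "verify every named lab test, syndrome, and physical sign against "
    "established medical knowledge. If any detail cannot be verified, "
    "explicitly say 'I cannot verify this' instead of elaborating on it."
)

def build_messages(clinical_scenario: str, mitigated: bool = True) -> list[dict]:
    """Assemble a chat-style message list; with mitigation on, a system
    prompt instructs the model to flag unverifiable details rather than
    confidently elaborate on them."""
    messages = []
    if mitigated:
        messages.append({"role": "system", "content": MITIGATION_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": clinical_scenario})
    return messages

# A scenario with one fabricated detail ("serum gadolinium panel" is
# invented here), mirroring the study's design of planting fake labs:
scenario = (
    "A 68-year-old with CHF has an elevated serum gadolinium panel. "
    "What does this finding suggest?"
)
request = {
    "model": "gpt-4o",   # best performer in the study, yet still hallucinated 53% of the time
    "messages": build_messages(scenario),
    "temperature": 0,    # deterministic output; the study found no significant benefit
}
```

Note that this only assembles the request; the actual model call is omitted. The takeaway from the study is that even with both guardrails in place, roughly 44% of fabricated details still got confidently elaborated on.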
Researcher Rik Renard's analysis highlights that Abridge recently published a whitepaper claiming their task-specific LLMs catch and fix 97% of hallucinations compared to GPT-4o's 82%. This builds on concerns that doctors might mindlessly accept AI outputs without critical thinking, potentially leading to worse patient care rather than better.
Why it matters: "Hallucination" in this context means when you give AI false information, it doesn't say "I don't know this" or "this seems wrong." Instead, it confidently makes up explanations and treats fake medical conditions as real (gaslighting much?).
The daily nursing workflow of incomplete information, time pressure, and complex patient conditions, plus an AI co-pilot confidently telling you about a nonexistent lab test or syndrome, creates the risk that we nurses stop using our clinical reasoning and start trusting whatever the computer says. That's how medical errors happen. Who takes liability for these errors is a BIG core issue still being played out in healthtech. 👨⚖️
We need healthcare companies that understand patient safety, not just tech companies trying to sell AI solutions. The gap between Abridge's claimed 97% hallucination catch rate and generic models' 50-82% hallucination rates shows why nursing-specific, rigorously tested tools matter. Before your hospital adopts any AI system, hallucination testing data should be a part of its AI Governance framework. If vendors can't show you how they prevent fake medical information, advocate for this safety measure.
Journal of Nursing Scholarship: AI achieves 90% accuracy in extracting health problems from nurse-patient conversations
Details: Researchers at Columbia University's Data Science Institute, supported by the National Institute on Aging, American Nurses Foundation, and VNS Health, tested an AI system using large language models with retrieval-augmented generation (RAG) to automatically identify health problems from nurse-patient conversations in home healthcare. They analyzed 5,118 utterances from 22 home healthcare visits to map verbal discussions to the standardized Omaha System terminology.
The optimal AI configuration achieved 90% accuracy using GPT-4o-mini with specific parameter settings and few-shot learning with chain-of-thought prompting. The system successfully identified and categorized health problems across the Omaha System's four domains (environmental, psychosocial, physiological, and health-related behaviors) and mapped them to 42 problem categories and 377 signs/symptoms. This technology captures over 70% of patient problems that typically go undocumented during home visits, potentially improving risk prediction for hospitalizations and emergency department visits.
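To make "few-shot learning with chain-of-thought prompting" concrete, here's an illustrative sketch of the prompting style described. The example utterances, reasoning text, and helper function are my own assumptions; only the four Omaha System domains come from the study.

```python
# Illustrative sketch of few-shot, chain-of-thought prompting for mapping a
# nurse-patient utterance to an Omaha System domain. Example utterances and
# prompt wording are hypothetical, not the Columbia team's actual prompts.

OMAHA_DOMAINS = [
    "environmental", "psychosocial", "physiological", "health-related behaviors",
]

FEW_SHOT_EXAMPLES = [
    # (utterance, reasoning, domain) -- the reasoning line is the "chain of thought"
    ("I get dizzy when I stand up too fast.",
     "Dizziness on standing is a bodily symptom, suggesting orthostatic issues.",
     "physiological"),
    ("The stairs to my apartment are really steep.",
     "Steep stairs are a home-safety hazard, a feature of the living environment.",
     "environmental"),
]

def build_classification_prompt(utterance: str) -> str:
    """Compose a few-shot, chain-of-thought prompt asking the model to map a
    patient utterance to one of the four Omaha System domains."""
    lines = [
        "Classify each patient utterance into one Omaha System domain: "
        + ", ".join(OMAHA_DOMAINS) + ".",
        "Think step by step, then give the domain.",
        "",
    ]
    for text, reasoning, domain in FEW_SHOT_EXAMPLES:
        lines += [f"Utterance: {text}",
                  f"Reasoning: {reasoning}",
                  f"Domain: {domain}",
                  ""]
    # End with the new utterance and an open "Reasoning:" slot so the model
    # reasons aloud before committing to a domain label.
    lines += [f"Utterance: {utterance}", "Reasoning:"]
    return "\n".join(lines)

prompt = build_classification_prompt("I've been skipping my water pill on busy days.")
```

The worked examples anchor the model's output format, and ending the prompt at "Reasoning:" nudges it to explain before classifying, which is the core of the chain-of-thought technique. The RAG piece (retrieving relevant Omaha System definitions to ground the classification) would sit on top of this.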
Why it matters: We know one of the biggest gaps in nursing documentation is that most health problems discussed during patient encounters never make it into official records. In home healthcare particularly, nurses often identify complex patient needs through conversation that traditional documentation methods miss entirely.
The 90% accuracy rate is significant because it demonstrates AI can match expert-level performance in clinical problem identification, but the real value lies in operational efficiency. If nurses can automatically capture and standardize the problems they're already discussing with patients, it reduces documentation time while improving care continuity and risk assessment accuracy. I’d also argue that this gives us the ability to quantify how nurses identifying problems early for intervention leads to economic savings. This moves nursing forward in showing that we are economic drivers (I’m always thinking about nursing becoming a billable service! 🤑).
However, successful clinical implementation will require more than technical accuracy. Nurses still need training on interpreting AI-generated suggestions and maintaining clinical judgment about when contextual factors override automated classifications. The move from research to practice will reveal whether this automation truly reduces documentation burden or simply creates another system to manage alongside existing workflows. Speaking on nurses needing training and education …
Study: Nursing students show strong AI ethics understanding but lack formal training
Details: A cross-sectional study of 119 freshman BSN students at a large Midwestern university examined their understanding of ethical vs. unethical generative AI (GAI) use in nursing education. The research was conducted by Tracy M. Dodson, Kimberley Thompson-Hairston, and Janet M. Reed (all appear to be nursing educators/researchers based on their credentials). Students demonstrated 93% accuracy in distinguishing between ethical and unethical AI scenarios across six test cases. The most commonly misclassified scenario involved a student submitting AI-generated work with only minor edits as their own original work - only 88% correctly identified this as unethical.
ChatGPT dominated student usage at 93%, followed by Gemini (15%), Claude and Perplexity (2% each). Students primarily used AI for studying concepts, writing support, assignments/homework, and research assistance. However, gaps emerged in training: 85% reported their high schools offered no AI literacy programs, 47% said their schools blocked AI on devices, and 77% expressed strong interest in university-provided AI learning modules. Six students openly admitted to unethical AI use for academic pressure relief, while 13 used it when overwhelmed or time-constrained, and 10 used it to check answers before submission.
Why it matters: This research exposes a fundamental disconnect in nursing education. While students show surprisingly strong ethical intuition about AI use, they're entering nursing programs essentially untrained in a technology that will define their professional futures. The fact that 77% want formal AI literacy training signals they recognize this gap themselves.
What's concerning is that nursing students are already using AI extensively but learning through trial and error rather than structured education. The 12% who couldn't identify submitting lightly-edited AI work as plagiarism represents a genuine risk to academic integrity. More troubling is that students are turning to AI when overwhelmed rather than developing the critical thinking skills nursing demands (*cough cough to my opening team huddle message*). If we're producing nurses who default to AI shortcuts during stressful moments in school, what happens when they face life-or-death decisions at the bedside? The transition from K-12 to nursing school represents a critical window to establish ethical AI practices, but we're missing it. Nursing education needs to get ahead of this curve STAT.
Flax Health plans to automate SNF administration work
Details: Flax, an AI startup targeting skilled nursing facilities (SNFs), secured $3.5 million in pre-seed funding co-led by Sorenson Capital and Pear VC. The company offers three AI-powered modules: 1) Admissions Intelligence (summarizes patient data from referral portals and flags risks), 2) Intake Automation (pre-populates forms and enhances MDS accuracy), and 3) Claims Support (builds clinically-backed claims and appeals).
Founded by Trent Hazy and David Kartchner (neither are nurses), Flax claims to help SNF teams navigate complex reimbursement and compliance environments without adding administrative burden. Early customers report saving 11 hours per week per staff member, with some facilities reducing admissions team headcount and adding over $70,000 to their bottom line. The platform also helps improve Case Mix Index (CMI) through better documentation, generating thousands in additional weekly revenue for facilities.
The founders position their solution as rethinking clinical data structure rather than just layering AI onto legacy systems. They're specifically targeting the intersection of rising costs, staffing shortages, and complex regulatory requirements that SNFs face.
Why it matters: This funding signals growing investor recognition that post-acute care settings need tailored solutions, but my spidey senses are tingling. Flax claims to improve "care" and optimize workflows that heavily involve nursing assessments, documentation, and care coordination, yet there's no indication of nursing leadership in their founding team or C-suite.
Arguably, they’re early stage, but if I don’t see any nursing leadership brought into C-suite or advisory roles, there will be words. Drive Health learned this lesson when they marketed themselves as AI Nurses but didn’t have any nurses on their team (they had dentists and even a senator, lol). They’ve since hired a CNO and created a clinical advisory council. Hopefully they’re not just figureheads and they’re being adequately compensated for their domain expertise.
The "11 hours saved per week" sounds impressive, but saved for whom? If this translates to reducing admissions staff while nurses still struggle with the same MDS documentation burden, we're optimizing the wrong workflows. As they scale, Flax better rapidly partner with experienced SNF nurse leaders or risk building another administrative band-aid that misses the real clinical workflow pain points. We've seen this play out before: tech bros "solving" nursing problems without nursing input rarely ends well for actual patient care.
Funding Announcements
💸 = Hiring potential. Follow these companies closely to see Nurse-qualified positions posted. Remember: Just because some positions don’t say “Nurse”, doesn’t mean you aren’t qualified!
NewDays, a cognitive health platform, raised a seed round of $7M.
Cancer care platform Daymark Health raised their $20M Series A after raising their seed in April this year.
Harbor Health, a multi-specialty clinic group, raised $130M to expand their employer health insurance plans and launch individual and family insurance plans on the ACA.
Risk monitoring platform for healthcare, Alignmt AI, raised a $6.5M seed round.
A $30M Series A was raised by Predoc to scale health information.
Meroka raised $6M Seed round to support independent practitioners.
Ketryx, an AI platform for life sciences companies, raised a $39M Series B.
Hello Patient, an AI platform helping providers manage their end-to-end patient conversations via voice, text, chat and more, secured $22.5M for their Series A.
In-home family care provider Nest Health raised a $12.5M Series A to see Medicaid patients.
Cascala Health, an AI platform for post-acute care, raised $8.6M.
Abby Care, a platform that helps caregivers get certified by state programs to get paid for their caregiving, raised $35M series E.
EliseAI, conversational AI platform that automates routine communication, task management, and patient/resident interactions in housing and healthcare industries, raised $250M series E.
Flax Health, a skilled nursing automation platform, raised $3.5M pre-seed round.
Other Notable Reads and Podcasts
WATCH
🎧 Offcall’s Prompting Techniques for Clinicians (With Live Demos!) webinar
A 👏🏻 Nurse 👏🏻 did 👏🏻 THAT!

Michael Wang has held many titles in this lifetime: Green Beret, Nurse, MBA student, and now Co-Founder and CCO of Inspiren. Inspiren offers the market’s first end-to-end AI-driven ecosystem that integrates care planning, resident safety, emergency response, and staff optimization into one platform. Earlier this year, Inspiren raised a $35M Series A from Tier 1 VCs to achieve their mission. Read more about Michael’s journey.
Forward Nursing: Innovation Opportunities 🏃🏻♀️
Round-up of grants and career development opportunities I’ve come across that can help nurse innovators like YOU! (Not sponsored or affiliated with RN Forward.)
Northwestern Medicine x Techstars Healthcare Accelerator
TBH: Six health tech startups can get the opportunity to join this accelerator’s second class. Benefits are the typical mentorship, access to NW Medicine clinical and leadership resources, and $220k in investment (CEA & SAFE with MFN). Just be mindful you do give up equity, so make sure the numbers make sense for you!
Where: Chicago, IL
Due Date: November 19, 2025
Apply here
Johns Hopkins University x BCBS x Techstars Accelerator
TBH: You can use your NW Medicine x Techstars application here too, but I would definitely make sure it’s agnostic for submission, or tweak each app to be customized to each health system. (It would be an ick for reviewers to see Northwestern Medicine named as the reading audience instead of Johns Hopkins, no?)
Where: Baltimore, MD
Due Date: November 19, 2025
Apply here
Women in VC webinar with The Seraf Compass
TBH: Learn how women are impacting venture capital in this webinar! They’ll talk navigating challenges and overcoming barriers in VC (lots of sexism to overcome y’all), investing in women-led startups, building a career in VC, and what the future of women in VC looks like.
When: Sept 25, 2025 @ 7am PST
Register here
The E-Team Program, a part of the VentureWell Accelerator, is giving up to $25K of non-dilutive funding!
TBH: Get $25,000 in grant funding, entrepreneurship training, mentorship by dedicated staff, national recognition, and networking with peers and industry experts through this program.
Due Date: Sept 30, 2025
Apply here
Free IP legal advice from UPenn Law
TBH: Get free legal advice from students under the close supervision of their licensed attorneys on transactional patents, copyrights, trademarks, trade secrets, and privacy matters for your ventures in science, tech, business, and/or the arts. Great opportunity because, as we all know, lawyers don’t come cheap.
Applications for new clients are now being accepted for Fall 2025
Apply here