Emerging Technologies

Section 6 - Preparing for the Future: Strategic Steps for Integrating LLMs and Generative AI in Organizations


In the rapidly evolving landscape of healthcare, organizations increasingly recognize the need to integrate cutting-edge technologies such as Large Language Models (LLMs) and generative AI into their operations. To prepare for the growing use of generative AI in the clinical setting, this article outlines essential elements ranging from governance and ethical considerations to staff preparation and training. The goal is to leverage AI as a powerful tool that augments patient care and alleviates clinician workload, ensuring these technologies are harnessed as valuable assets rather than becoming goals in themselves (there is no value in saying "our team is using AI to…" unless the AI provides clear value). By providing a clear framework for the strategic adoption of AI, this guide aims to help healthcare organizations navigate the complexities of this digital transformation effectively and responsibly.


Effective governance is crucial in overseeing the ethical, responsible, and effective use of LLMs and other AI in healthcare. It involves setting standards, principles, and procedures. Consider the following questions as you develop a governance plan at your organization for what will likely be a long list of AI solutions. 
Is this a good idea?  

There are innumerable ideas for how to use generative AI in healthcare. Before diving into AI implementation, though, healthcare organizations must critically assess whether a given application aligns with their values, goals, and priorities. What are the perceived benefits? Is the solution likely to meet its potential in this organization? What characteristics of my organization make this solution more or less likely to succeed?

Consider whether the proposed solution has been successfully implemented in similar settings and at a similar scale to yours. Many solutions may seem promising during a pilot phase, but only a few have proven to be reliable on a larger scale. Generative AI can create convincing prototypes and small pilots. Scaling a successful AI solution is still a complex process. Assess whether your organization is willing to take the risk or would benefit more from a proven solution, regardless of whether it uses AI. 

Does it align with our principles for responsible AI? 

Using AI responsibly may vary based on the use case. In Section 2 of this series, we discussed a set of LLM and generative AI use cases and ranked them by risk. Organizations should develop a set of guidelines in advance for acceptable uses of AI. For example: 

  • When is “Human in the Loop” mandatory? When can workflow be completely automated?  
  • What is the expectation for solutions to be monitored on an ongoing basis to look for changes in function over time? Continuous monitoring is key to ensuring AI tools remain aligned with healthcare goals and adapt to changing needs and technological advancements.  
  • What are acceptable types of training and testing of the model? Do you require training and testing on your own data?  
  • Are there safeguards in place for inaccuracies and hallucinations? 
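
Guidelines like those above can be captured in a form that every proposed solution is checked against consistently. As a purely illustrative sketch (the tier names, monitoring cadences, and rules below are assumptions, not a standard), such a policy might look like:

```python
# Hypothetical encoding of acceptable-use guidelines as policy data, so each
# proposed AI solution can be reviewed against the same rules. Risk tiers,
# monitoring cadences, and requirements are illustrative assumptions.

POLICY = {
    "low_risk":  {"human_in_loop": False, "monitoring": "quarterly",
                  "requires_local_testing": False},
    "high_risk": {"human_in_loop": True,  "monitoring": "continuous",
                  "requires_local_testing": True},
}

def review_required(risk_tier: str) -> bool:
    """A human reviewer is mandatory for any tier where policy says so."""
    return POLICY[risk_tier]["human_in_loop"]

def monitoring_cadence(risk_tier: str) -> str:
    """How often the solution's function must be re-checked for drift."""
    return POLICY[risk_tier]["monitoring"]
```

Making the policy explicit in one place, rather than re-deciding it per project, is what turns these questions into enforceable governance.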

For models generating prose, use consistent rubrics to evaluate aspects such as accuracy, bias, tone, and completeness. Solutions that provide a clear mechanism for issue reporting, performance grading, and model correction post-implementation help ensure ongoing success.
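
As one way to make such a rubric concrete, a minimal sketch might average reviewer ratings across the dimensions named above and flag low-scoring drafts for follow-up. The 1-5 scale, equal weighting, and threshold here are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical sketch of a consistent grading rubric for LLM-generated prose.
# Criterion names mirror the dimensions discussed above; the 1-5 scale and
# the review threshold are illustrative assumptions.

CRITERIA = ("accuracy", "bias", "tone", "completeness")

def score_draft(ratings: dict[str, int]) -> float:
    """Average reviewer ratings (1 = unacceptable, 5 = excellent)
    across all rubric criteria; fail loudly if any criterion is unrated."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

def needs_review(ratings: dict[str, int], threshold: float = 4.0) -> bool:
    """Flag drafts scoring below the threshold for human follow-up."""
    return score_draft(ratings) < threshold
```

Using the same criteria for every draft is what makes grades comparable over time and supports the ongoing monitoring described above.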

Does it connect to our strategy?  

When deciding to implement any technology in healthcare, and particularly AI given its current popularity, consider whether the solution aligns with your organization's strategy rather than detracting from it. While many AI applications are impressive, not all of them are immediately relevant to your specific goals. Leaders need to carefully assess AI tools such as LLMs to determine whether they can improve outcomes related to the organization's key areas of focus, weighing the potential value of the solution against any distractions it may cause. Consider where it is important for your organization to be an innovator or early adopter and where it makes more sense to wait and see which implementations prove most successful. Decisions should be strategic and well considered, reflecting the organization's overarching mission and goals rather than the hype or promises of AI.

Does it meet our expectations?  

Develop clear criteria for success prior to implementation to help ensure your tool is meeting your expectations. AI solutions, regardless of their purpose, should be integrated into the workflow, not add to clinician burden, and have a clearly defined outcome that meets your clinical and business needs. Because generative AI is still in its early stages of integration in healthcare, consider including criteria that address potential unintended consequences such as bias, accessibility problems, and inaccuracy or hallucination, as these risks may affect the likelihood of success of any solution.

Is there a return on investment?

Traditional return on investment (ROI) metrics may not apply well to LLMs and other AI solutions in healthcare. While each AI solution is likely to demonstrate an ROI over the status quo, evaluating ROI against other factors such as alternative solutions, process improvements, operational enhancements, and traditional technology solutions may reveal that the AI ROI is not as significant as advertised. Measuring ROI against a non-optimized status quo may inappropriately favor the AI solution. It is also important to consider ROI beyond financial returns, since much of the work performed by AI solutions aims to reduce burden and improve efficiency – improvements that may not necessarily enable staff to do more work or allow the organization to operate with fewer employees. Critical healthcare measures such as improved employee retention, reduced clinician burden, and enhanced clinical outcomes may add meaningful information to the ROI assessment.


Develop an education plan for staff on the appropriate use of AI and consider incorporating it into regular yearly training. For example, if using an LLM to generate text, staff should understand risks such as hallucinations and bias. Training should cover not only the technical use of AI tools but also ethical considerations, data privacy, and how to interpret AI-generated information.

Human in the Loop (HITL) Education:  

The concept of HITL is essential, especially in the case of generative AI. Clinicians need to be vigilant, recognizing the importance of review and editing to prevent biases and inaccuracies from going unnoticed. A careful assessment of the acceptable level of risk is crucial; this level may vary depending on the use case. If a human is ultimately responsible for the content (as in many use cases), this should be clearly communicated in training and policy.

Publicly Available AI:

Clinicians should be educated on the risks associated with publicly available AI tools, such as the need to ensure HIPAA compliance and to avoid exposing intellectual property or proprietary information. A formal strategy for AI compliance education, including the risks of using publicly available tools, should be considered.

Prompt Etiquette: 

When clinicians are responsible for generating prompts for generative AI, they may benefit from additional curriculum covering the basics of prompt engineering to improve the reliability of results. Though it can be challenging to craft prompts that produce trustworthy output, clinicians can build basic skills that yield significant improvements in results.

Ad-hoc prompts used for decision-making or information retrieval pose significant risks, since there is no formal data science framework in place to ensure appropriate responses. The appropriate use of ad-hoc prompts must therefore be clearly communicated to clinicians, along with the potential risks involved.
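
To illustrate one common prompt-engineering basic, a structured template that states role, task, constraints, and source material explicitly tends to be more reliable than a one-line ad-hoc request. The field names and example content below are assumptions for illustration, not a vendor API or a clinical standard:

```python
# Illustrative sketch of basic prompt structure: stating role, task,
# constraints, and source material explicitly, rather than leaving the
# model to guess intent. All field names and content are hypothetical.

def build_prompt(role: str, task: str, constraints: list[str], context: str) -> str:
    """Assemble a structured prompt from explicit, reviewable parts."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_text}\n"
        f"Use only the following source material:\n{context}"
    )

prompt = build_prompt(
    role="a clinical documentation assistant",
    task="Summarize the visit note below as a patient-friendly after-visit summary.",
    constraints=[
        "Do not add facts that are not in the source material.",
        "Write at an 8th-grade reading level.",
        "Flag any ambiguity for clinician review instead of guessing.",
    ],
    context="(visit note text would go here)",
)
```

Because the constraints are written down rather than implied, they can be reviewed, standardized, and taught — which is exactly what distinguishes a curated prompt from the risky ad-hoc prompts described above.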


As healthcare organizations embark on this transformative journey, it is crucial to remember that LLMs and other forms of AI are tools in service to broader strategies. By adopting a thoughtful and strategic approach to governance, validation, and compliance education, organizations can pave the way for the responsible and effective use of LLMs and generative AI, ensuring they contribute to improved patient outcomes and enhance healthcare delivery.  


The views and opinions expressed in this content or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.