Pharmacopsychiatry
DOI: 10.1055/a-2577-7214
Review

Patient and Physician Exposure to Artificial Intelligence Hype

Scott Monteith
1   Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, Michigan, USA
,
Tasha Glenn
2   ChronoRecord Association, Fullerton, California, USA
,
John R. Geddes
3   Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
,
Peter C. Whybrow
4   Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, California, USA
,
Eric D. Achtyes
5   Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, USA
,
Rita Bauer
6   Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany
,
Michael Bauer
6   Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany

Abstract

Both patients and physicians are routinely exposed to the corporate promotion of artificial intelligence (AI) for healthcare products. Hype for AI products may impact both patient behavior and attitudes about healthcare. Corporate AI hype may intentionally overlook the known limitations associated with AI products and focus solely on potential benefits. As AI is increasingly integrated into medicine, physicians are also routinely subject to AI hype. As the promotion and use of AI products have grown dramatically in recent years, physicians should be aware of the potential benefits and risks of AI products despite the hype.



Introduction

Exposure to corporate marketing of artificial intelligence (AI) has become a routine part of daily life for both patients and physicians. Spending on AI development and marketing has grown enormously. For example, worldwide spending on generative AI solutions was expected to double in 2024 from $19.4 billion in 2023 and to reach $151.1 billion by 2027 [1]. In the US, the adoption of generative AI was faster than the adoption of the personal computer or the Internet, with 39% of the US population aged 18–64 using generative AI in August 2024 [2].

For emerging science and technology products with commercial potential, hype often simplifies and sensationalizes, focusing on the benefits and understating the risks [3] [4]. Given the scale of these expenditures, the marketing of AI can often itself be described as hype. Hype can directly affect the valuation of a company [5]. For example, OpenAI was valued in the billions in 2024 without ever having turned a profit [5]. Companies use celebrities, who often have financial interests at stake, to endorse AI tools [6] [7] [8]. Hype for technology products often includes sophisticated videos that promise more than is actually delivered [5]. Hype is also often repetitive: people tend to believe things that are repeated frequently, including falsehoods, because familiarity is not easily distinguished from truth [9]. Hype often exaggerates the capabilities of AI products, distorting expectations [10]. Physicians and their patients need to be aware that they are routinely exposed to AI hype from corporate promotional activities that include advertising, marketing, and public relations [3].

Background on artificial intelligence

The quest for AI is an extremely complex process that has developed over decades, with some promising results, although general intelligence remains outside the capabilities of our programmed computers [11]. AI encompasses a set of varied engineering techniques. Much of the focus today is on generative AI, which uses large amounts of data to make predictions about what humans would do in a similar context, for example, to predict what word a human would add at the end of a particular sequence of words. Large language models (LLMs) are the basis of the best-known generative AI products, such as OpenAI’s GPT-4, Microsoft’s Bing, and Meta’s LLaMA [5].

LLMs are not anchored in facts and cannot distinguish between fact and fiction [5]. Although an LLM may create responses that are coherent and grammatically correct, it does not understand the text [12]. For example, an LLM should not be trusted to provide financial advice, since its answers may contain arithmetic errors and it lacks the common sense to recognize answers that are obviously wrong [12]. Online misinformation, including images, is frequently generated by AI [13].

Humans are actively involved in the creation of an AI system, including algorithm development, training, testing, deployment, commercialization, updating, and re-training [14]. However, companies may characterize their AI products as “superhuman” or as “operating without human knowledge” even when the AI systems were entirely developed by humans [15]. Many companies hide the human involvement in an AI system, leading people to believe that AI products work better than they actually do [16]. For example, at least 10,000 workers in the Philippines were involved with the US company Scale AI, which collects data for large American technology companies, including Meta, Microsoft, and generative AI companies such as OpenAI [17].



Rise of artificial intelligence hype

Businesses often exaggerate the capabilities of AI products in marketing materials, an exaggeration that often originates in research and development environments [15]. After the release of ChatGPT, AI hype became so pervasive that AI widely penetrated the public consciousness [18]. Organizations often label their products as AI to attract attention, funding, and talent, some even presenting AI as having near-magical intelligence [19]. Non-specialists should therefore beware of overenthusiastic marketing claims. To successfully take advantage of the potential of AI, it is important to understand the spectrum of AI capabilities and appropriate uses, and to recognize inflated claims [19] [20].



Artificial intelligence hype in healthcare targeting both patients and physicians

Patients are subjected to hype of AI capabilities in healthcare, including both utopian visions of AI as the magic cure and dystopian fears that AI will lead to deskilling and collapse of the healthcare system [21]. In healthcare, press releases for the general public related to medical research may contain exaggerated claims and promises of soon-to-come practical achievements of AI [11]. The hype in healthcare seen by the general public includes statements like “AI may be as effective as medical specialists at diagnosing disease” [22]. There are also increasing advertisements to patients from organizations that use AI as part of the patient care process [23] [24]. Another concern with scientific hype in the popular press is that it becomes ubiquitous without reflection on the accuracy of the claims [25]. Additionally, journalists may not have the technical background to understand the limitations and challenges of AI and to accurately simplify the technology for a general audience [26].

Physicians are also subject to AI hype as AI is integrated throughout medicine [27] [28] [29]. As of fall 2024, the US Food and Drug Administration had approved 692 AI-based medical devices, including 531 in radiology, 71 in cardiology, and 20 in neurology [30]. Despite the hype, AI has well-defined pitfalls that are of particular concern in medicine. For example, LLMs are subject to accuracy issues, hallucinations, glitches, data biases, data quality problems, unpredictable outputs, privacy issues, and ethical concerns [27] [28]. LLMs are not capable of formal, logical reasoning [31] [32]. There are many known AI challenges related to the data, including quality, quantity, representativeness, and completeness [28] [33] [34]. An AI-powered transcription tool, Whisper, invented text and sentences in hospital transcriptions [35]. Another potential pitfall is that AI may work on a test dataset but not perform well when implemented in the clinical production environment [36] [37] [38]. Additionally, the costs of implementing and supporting an AI system may be much higher than assumed [16].

In articles discussing AI, known limitations are often de-emphasized, omitted, or addressed with framing language such as “skeptics say” [39]. One major concern is a lack of understanding of the limitations of AI, especially when AI is deployed in high-risk settings such as healthcare. Another danger of AI hype in healthcare is that it distracts scientists from the real issues, including technical details and the selection of appropriate products and services [16]. In orthopedic research, AI hype has resulted in one review article for every two original reports [40].



Limitations

This article is focused on AI hype and does not discuss the hype of non-AI products. The potential and varied benefits of AI products being hyped in healthcare, such as reducing administrative costs, improving outcomes, and minimizing inequalities, are not discussed [41] [42]. For example, AI tools may assist with patient education, improving the readability of patient educational materials [43] [44] [45]. Technical details related to the algorithms used to develop the AI products and issues related to regulation and cybersecurity were omitted. Methods for auditing AI products in healthcare and legal liabilities for medical errors related to the use of AI products are omitted in this article [46] [47]. The huge energy and water requirements at AI data centers have not been discussed. Additionally, potential measures to deal with AI hype and aggressive marketing of AI products are not included.



Conclusion

Physicians should be aware that they and their patients are routinely exposed to corporate promotional hype of AI products. Effort is required to mitigate the effects of hype on patient expectations and physician treatment decisions. Further research into the impact of AI hype on society is needed.



Funding Sources

No funding was received.



Conflict of Interest

The authors declare that they have no conflict of interest.

Acknowledgement

Author Contributions: SM and TG wrote the initial draft. All authors reviewed and approved the final manuscript.


Correspondence

Scott Monteith, MD
Michigan State University College of Human Medicine
Traverse City Campus, 1400 Medical Campus Drive
Traverse City, MI 49684
USA

Publication History

Received: 06 January 2025

Accepted after revision: 10 March 2025

Article published online: 12 May 2025

© 2025. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany