A stridently anti-Israel Harvard graduate student used artificial intelligence to churn out at least five of the more than 90 medical journal articles he published in two and a half years, including one about whether newborns have a future in Gaza, a Washington Free Beacon review shows.
“The ongoing Israeli military assault on Gaza has led to an alarming humanitarian catastrophe, whereby the onset of famine is coupled with a deterioration of maternal health services, severely impacting the wellbeing of pregnant women and of children. The near-total collapse of the health-care infrastructure, coupled with the lack of access to essential medical services, has resulted in a tragic surge in preventable maternal and neonatal deaths,” said the article titled “Will there be a future for newborns in Gaza?” in the November 2, 2024, issue of the Lancet, a British medical journal. “The world cannot remain silent any longer. The time for action is now—to restore access to health care, to protect women and children, and to uphold the sanctity of life.”
The lead author of the article, Bilal Irfan, published a Harvard Medical School email address for correspondence related to the article, and he listed his affiliation as “Center for Bioethics, Harvard Medical School, Boston, MA 02115, USA.” At the time the article was published, he was a graduate student at Harvard Medical School. The author listed second on the article, Abdallah Abu Shammala, listed his affiliation as Gaza’s European Hospital. The Israel Defense Forces has posted video of Hamas terrorist tunnels under that hospital, and in May 2025 the IDF said it had killed “the terrorist Mohammed Sinwar, Head of the Hamas terror organization’s military wing,” along with two other terrorists, “in an underground command and control center, under the European Hospital in Khan Yunis.”
Three online screening programs—Pangram, Winston AI, and ZeroGPT—marked the article as 100 percent AI-generated. Another program, Quillbot, said 54 percent of the text of the article is AI-generated. A fifth program, GPTZero, gave the article a 73 percent likelihood of being AI-generated.
For a person in his early 20s, Irfan has an astonishingly long list of scholarly journal articles, many of them co-authored with physicians at Gaza hospitals that, according to Israeli and American officials and to physical, photographic, and video evidence, were used for Hamas terrorist operations.
The U.S. government’s official PubMed database, hosted by the National Institutes of Health’s National Library of Medicine, lists Irfan as an author on 91 articles published in the two and a half years between November 30, 2023, and March 30, 2026. In 2025 alone, according to PubMed, Irfan published 57 journal articles, a rate of more than one a week.
If it sounds more prolific than humanly possible, that appears to be for a reason. The Free Beacon put the texts of those articles through five different programs designed to detect the use of artificial intelligence. Several of the papers were flagged by multiple scanning programs as having a 100 percent likelihood of being AI-generated.
The five papers that were flagged most consistently as AI-generated included “The Digital Lifeline: Telemedicine And Artificial Intelligence Synergy As A Catalyst For Healthcare Equity In Pakistan,” published in February 2024 in Cureus; “Beyond The Scope: Advancing Otolaryngology With Artificial Intelligence Integration,” also published in February 2024 in Cureus; “Between Fajr And Isha: Understanding Sleep Dynamics In Islamic Prayer Timings And Astronomical Considerations,” published in April 2024 in Cureus; “Sleep Health Ambassadors In Greater Detroit: A Model For Religio-Culturally Conscious Care In Places Of Worship From Dearborn To Hamtramck,” published in Cureus in May 2024; and the “Will There Be A Future For Newborns In Gaza?” article published in the Lancet in November 2024.
One screener, Pangram, marked 21 of the papers with 100 percent AI likelihood. Another screener, GPTZero, marked 14 of them with 100 percent likelihood. Winston AI marked 13 of them as 100 percent AI-generated. Other screeners were less sensitive, and there are reports of false positives with some screening programs.
Artificial intelligence has the potential to boost scholarly productivity and output. However, it also poses risks to research integrity, quality, and accuracy. The Alfred and Rebecca Lin Professor of Computer Science at Harvard, Ariel Procaccia, recently took to the New York Times to denounce Chinese universities that “churn out papers at a ferocious pace, but the quality of these publications is too often in question,” cautioning about “rushed, shoddy or outright fraudulent research.”
The quality of the Irfan articles does appear questionable. They have the even tone characteristic of AI. The one on telemedicine in Pakistan uses left-wing jargon and buzzwords about stakeholders and equity and recommends government subsidies. “This situation calls for targeted government intervention to subsidize or provide the necessary technology to underserved populations, ensuring equitable access to telemedical services,” the article says. “In a country where healthcare delivery faces numerous obstacles, telemedicine offers a viable solution to extend healthcare access, reduce costs, and improve patient outcomes. For telemedicine to achieve its full potential, however, a concerted effort is needed from all stakeholders.”
The one on “sleep health ambassadors” reports, “The potential effectiveness of this model lies in its community-centric approach. By involving local religious leaders, imams, and community stakeholders in the planning and implementation phases, the programs ensure greater relevance and acceptance. … Nevertheless, the success of such culturally integrated health programs heavily depends on the extent of community buy-in and the continuous engagement of local leaders. … Recognizing these limitations not only provides a more balanced view but also sets realistic expectations for stakeholders, enhancing the editorial’s credibility and robustness.”
The one on AI in otolaryngology, ironically, dwells on the potential of large language models. “The integration of large language models (LLMs) into medical education offers a promising avenue to enrich learning experiences in otolaryngology. … Furthermore, LLMs could assist in the curation and summarization of the latest research findings, ensuring that otolaryngologists stay abreast of the rapidly evolving evidence base. … The integration of AI into otolaryngology presents an exhilarating yet challenging frontier.”
The one on sleep dynamics in Islam concludes, “As the global Muslim population navigates the challenges of modern living, including the demands of maintaining prayer times, healthcare providers, community leaders, and technology developers must collaboratively ensure that sleep, a fundamental pillar of health, is not compromised.”
Irfan was the featured speaker at a March 22, 2026, event of Harvard’s FXB Center for Health & Human Rights and of a boycott-Israel advocacy group. He was introduced as “a bioethicist who conducts research at Harvard’s Brigham and Women’s Hospital” and who has “published over 100 peer-reviewed articles in medical journals including the Lancet.” At that event, he urged people to engage in “advocacy” and repeatedly accused the “apartheid regime” of Israel of “genocide” in Gaza.
He has identified himself in medical journal articles for the past two years, including one in the Lancet Global Health this month, as affiliated with the “Center for Bioethics, Harvard Medical School, Boston, MA, USA.”
Harvard Medical School now says he departed nearly a year ago, when he finished a master’s program, and that he has been inappropriately using his affiliation with the school for the past year.
“Bilal Irfan is a former student in HMS’ one-year Master of Science in Bioethics program from August 2024 until May 2025 when he graduated. Of the papers you referenced, none of the work was done as part of the master’s program, therefore the HMS affiliation should not have been used. Beyond being an alumnus of the master’s program, Irfan has no past or current affiliation or academic appointment at HMS,” a Harvard official said.
The director of the Harvard Medical School Center for Bioethics, Rebecca Weintraub Brendel, did not respond to two emails seeking comment. Dr. Brendel is a former president of the American Psychiatric Association. The center lists the total cost of attendance for its one-year master’s program at $100,304.
Some of the dozens of papers, including the April 2026 article in the Lancet Global Health, identify Irfan as an affiliate of the Center for Surgery and Public Health at Brigham and Women’s Hospital in Boston. Irfan is not listed on the center’s website as faculty, staff, research fellow, or research fellow alumni. The center’s director and administrative director did not respond to inquiries about him. The press office of Mass General Brigham, into which the Harvard-affiliated teaching hospital has merged, said in response to a query about the five papers, “We have no indication that any of this work was done at our hospital system.”
Irfan did not respond to queries at the Harvard and University of Michigan email addresses that were published for correspondence with the journal articles. The University of Michigan also did not respond to a query about Irfan.
The Lancet referred me to its editorial policies on “Use of generative AI and AI-assisted technologies.” That policy states, “The Lancet Group supports the appropriate, transparent, and responsible use of generative AI and AI-assisted technologies by authors in their research, and in the preparation of their manuscripts. Upon submission, authors will be asked to declare the use of AI tools. Declaring the use of AI tools supports transparency and trust between authors, readers, reviewers, editors, and contributors.” The journal did not reply to my follow-up asking, “In the case of the article I am asking about, was the use of AI tools declared? If the point is transparency and trust with readers, as a reader, where is the trust here if the tools were used but not transparently declared to readers?”
Cureus, a journal where Irfan published repeatedly, did not respond to my queries about Irfan’s apparent use of AI. Cureus was delisted from one journal index in October 2025 amid concerns about article quality, Retraction Watch reported in an article whose headline described the journal as “embattled.”
A 2025 senior survey by the Harvard Crimson found “30 percent of respondents said they had turned in AI-generated work as their own,” and “The share of students who reported cheating in an academic context at Harvard was 30 percent.” Harvard Medical School’s generative AI guidelines state that “Responsible AI use at HMS includes ethical, transparent practices,” advising users to “uphold academic integrity.” Harvard’s integrity is at the center of a multi-billion-dollar clash between the institution and the federal government, which last month sued Harvard, alleging that the school obtained billions in federal research grant funding while falsely certifying that it was in compliance with federal rules forbidding discrimination against Jews.
Irfan is the lead author of a January 2026 article in the International Journal of Health Planning and Management titled “The Political Determination of Gaza’s Health System Destruction and Reconstruction and the Limitations of International Medical Deployments.” Two of the other authors of that article are listed as being affiliated with Al‐Shifa Hospital in Gaza. Al-Shifa was the subject of a January 2024 New York Times article headlined “Hamas Used Gaza Hospital as a Command Center, U.S. Intelligence Says.” The Times article reported, “A senior U.S. intelligence official said on Tuesday that the American government continued to believe that Hamas used the hospital complex and sites beneath it to exercise command and control activities, store weapons and hold ‘at least a few hostages.’” Irfan’s article faults Israel, reporting, “The Al‐Shifa Medical Complex, Gaza’s largest tertiary hospital, was extensively damaged after a protracted siege and successive raids by the Israeli military that killed patients and staff, drawing only perfunctory international outcry.” The article text makes no mention of Hamas.