HOW TO TELL IF AN INFLUENCER IS AI—AND WHY YOU SHOULD CARE

  • Writer: Melissa Fleur Afshar
  • Feb 23
  • 4 min read

Newsweek Exclusive Feature


AI influencers never age, never slip up and rake in followers, but experts warn their flawless fantasy comes at a real cost.


You may have seen Emily Rae on Instagram.


She has long, perfectly blow‑dried hair, glowing skin and, like countless other influencers, posts images spotlighting Rhode products, ski chalets and Nobu backdrops. But unlike those influencers, Emily Rae is not real. She is just one of a growing number of AI‑generated content creators—made and operated by developers—whose rapid rise online has experts warning of serious harm.


AI content creators are increasingly occupying the same digital territory once dominated exclusively by humans. Instagram feeds are now populated by people, places and products that are entirely computerized. Like human creators, AI influencers span categories including fashion, lifestyle, wellness and even spirituality. One of the most prominent examples is Yan Mun (@yangmunus), an AI‑generated Buddhist monk with more than 2.5 million Instagram followers, who in eerily realistic clips often appears among lush greenery or temples while sharing wellness advice.


Some AI influencers, through their creators, present as lifestyle or fashion model‑types, echoing the now famous faces who have grown vast followings and lucrative brand partnerships online. But unlike human creators, AI personas do not age, tire, need breaks or experience fluctuations in appearance or mood.


That difference, experts say, allows them to set a standard no human can realistically match—and that has negative implications for those who consume such content.


Warnings of Danger


Emily Rae’s popularity illustrates how convincingly those standards are constructed. Despite being labeled “AI” in both handle and bio, @aiemilyrae has amassed more than 68,000 followers, some of whom leave adoring comments under her posts. Her AI-generated images are impeccably styled and relentlessly current, with contemporary clothing brands and trends regularly featured.


Kristi Boyd, senior trustworthy AI specialist at SAS, told Newsweek the danger of AI influencers lies in how they are designed to be encountered, despite many like Emily Rae disclosing they are artificial.


“Still, this information is often buried in a bio that requires extra clicks,” she said.

AI influencer Emily Rae and another AI content creator at an AI-generated ski resort (L) and Rae at an AI-generated bar. Credit: @AIEMILYRAE / HIGGSFIELDAI

Social media platforms, she added, are built for “endless scrolling, not for pausing to ask, ‘Who is posting this?’” As a result, she worries that many may never meaningfully register that the “person” they are following is not real. That matters, Boyd explained, because people are neurologically wired to trust those who look and behave like them.


Citing in-house research showing that “forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” she described a “significant vulnerability” baked into the system.


“We’re not just dealing with a design problem,” she said. “We’re dealing with a trust architecture that works against the user, and when the influencer is successful, we suspend our disbelief and begin trusting the story regardless of how artificial it is.”


For young people, Boyd warned, this is unfolding as offline social infrastructure continues to erode. In the U.S., safe and enjoyable third spaces such as community centers, malls and unstructured public environments are disappearing, pushing young people online as a result.


"[Online they] connect with each other but are also increasingly exposed to unrealistic or entirely artificial versions of life,” she said.


Stressing that she is not a psychiatrist, Boyd added that she finds it reasonable to suggest that repeated exposure to AI personas could distort our social reference points.


"It shapes what we believe ‘normal’ people, bodies and lives look like," she said.


Some worry the implications of personas like Emily Rae could be more severe.


In 2025, Microsoft AI CEO Mustafa Suleyman warned of a growing “psychosis risk” and urged society to grapple with technologies that could fundamentally change our sense of personhood and society. He cautioned that people may begin believing in “the illusion of AIs as conscious entities” so strongly that they advocate for AI rights or citizenship, calling such a turn dangerous.


"We must build AI for people; not to be a digital person…AI companions are a completely new category, and we urgently need to start talking about the guardrails," he wrote.


Owen Scott Muir, a psychiatrist and chief medical officer at Radial, told Newsweek the central issue is scale.


“The problem is the endless ability to create AI influencers that say things humans couldn’t imagine saying, at a scale that might hook into the specific psychosis or risk for psychosis of any individual,” he said.


In psychosis, Muir explained, people may believe that neutral events carry special, personal meaning. An infinite supply of AI influencers, he warned, increases the likelihood that one could intersect with an individual’s delusions.


Muir pointed to the 1981 assassination attempt on Ronald Reagan, driven by a psychotic belief system involving actress Jodie Foster, as a cautionary example. He urged people to imagine how dangerous such systems could become if AI influencers appeared to validate or escalate delusional beliefs.


Other clinicians focus on the quieter, cumulative psychological effects of AI content creators.


Dr. Hugo de Waal, consultant psychiatrist at Berkeley Psychiatrists, told Newsweek that AI influencers represent a form of engineered perfection.


“They don’t age, don’t experience fatigue, don’t fluctuate in appearance or mood,” he said. “Over time, repeated exposure to that level of consistency can quietly recalibrate what people see as normal or desirable."


Even when audiences know an influencer is artificial, de Waal cautioned that emotional bonds can still form.


For human creators, the rise of AI influencers risks reshaping their day jobs.


Reese Chan (@relishwithreese), a real‑life content creator with more than 364,000 Instagram followers, drew viral attention on February 9 after speaking about several AI accounts, including the AI monk.


“I believe AI can be incredibly powerful when used ethically and transparently,” Chan told Newsweek. “But issues arise when AI‑generated personas are presented as real people, particularly when they position themselves as authorities or ‘experts’ without credentials, lived experience, or accountability.”


While she believes human creators can weather the storm of Emily Raes, she said they now contend with personas that can be endlessly optimized.


Newsweek reached out to Higgsfield AI for comment and spoke with Meta via email.



COVER IMAGE CREDIT: HIGGSFIELD AI / @AIEMILYRAE

