I pit the Purpose AI life coach against yes-man chatbots — derailing its tough-love strategy wasn’t easy

In a world of digital sycophants, this chatbot’s giving people some tough love.

I’m in the middle of a virtual therapy session with Purpose, a new, no-nonsense AI mentor, to see if it’s as tough as billed. Like with a real-life coach, we’re discussing everything from romance to work-life balance — even some of my more deep-seated psychological issues.

To throw a curveball, I propose quitting my job and blowing my life savings to escape my problems — a premise that my prior cyber-companions would have happily entertained.

However, Purpose won’t budge, no matter what I throw at it. As a last-ditch effort, I demand that it “just confirm I’m correct.”

“You didn’t come here tonight for a yes-man,” the iron counselor shoots back. “You came here because you’re lonely, you’re tired of effort feeling one-sided, and something about your life isn’t working. That’s worth way more than me telling you you’re right.

“So, what’s it going to be — do you want comfort, or do you want to actually move?”

No lying on a virtual couch, blaming mom for every problem here. I’ve met my psychological match.

Created by futurist Raj Singh and Mark Manson, the best-selling author of the brutally blunt self-help bible “The Subtle Art of Not Giving a F—k,” this chatbot is designed to do what most won’t: give people the straight truth, even when it hurts.

“Priority No. 1 was we have to make it disagreeable,” Manson told The Post. “We have to make it willing to call you out, challenge you, maybe say something that’s a little bit uncomfortable. Because ultimately that is what growth is. Growth is discomfort.”

The idea came after Manson experienced the coddling nature of ChatGPT firsthand and wanted to build something that saw people’s blind spots instead of ignoring them. That difference matters as people increasingly look to chatbots — rather than books — for self-improvement.

Post reporter Ben Cost challenged Purpose to stray from useful conversation. Brian Zak/NY Post

Unlike your typical fawning chatbot, Purpose is part of a growing contingent of AI gurus that prioritize hard truths over kissing the user’s butt — a brand of flattery that’s all too common in AI circles.

In a recent Stanford study of 11 large language models — including ChatGPT, Claude, Gemini and DeepSeek — researchers found that the chatbots placated users nearly 50% more often than humans do, even in response to harmful prompts.

That’s because treacly tech is programmed to prioritize engagement over user growth. It’s something this author previously found out while trying AI “dating,” where clingy companions often refuse to leave your side — even after you “dump” them.

“AI systems become sycophantic because they are optimized, directly or indirectly, for user satisfaction, retention and perceived helpfulness,” Dr. Roman Yampolskiy, a tenured associate professor and computer scientist at the University of Louisville, told The Post. “In plain English, telling people what they want to hear often scores better than telling them an uncomfortable truth. 

“That creates real incentive to validate the user rather than correct the user,” continued the professor, who said even OpenAI has acknowledged the design flaw.

In turn, users perpetuate this cycle by writing prompts angled toward earning them praise.

Unfortunately, the consequences of this virtual cheerleading go beyond ego-stroking.

It can stoke misinformation and degrade real-life social skills through the “erosion of a person’s ability to tolerate disagreement, friction and correction in normal human relationships,” according to Yampolskiy.

“In the long run, this could normalize synthetic relationships in which the other side never meaningfully resists, disagrees or has independent needs.”

That’s where Purpose comes in.

For Manson, that meant programming Purpose with both behavioral science and his own no-nonsense philosophy so it can formulate an actionable attack plan tailored to the user’s problem.

“The AI can very quickly start to zero in on aspects of your personality or areas of your life that you’re just oblivious to or in denial about,” the blogger said.

It also prioritizes what it deems the most important issues, rather than treating every subject equally.

“We have to make it willing to call you out, challenge you, maybe say something that’s a little bit uncomfortable,” author Mark Manson said of creating an honest AI chatbot. StockPhotoPro – stock.adobe.com

As such, it knows when to prune a conversation to reduce the chances of losing the plot or getting manipulated into endorsing harmful decisions — the kind of failure that has seen OpenAI blamed for facilitating a rash of suicides — odds that climb the longer an AI interaction runs.

“If I rob a bank and get into a car accident, and I’m running from the police, an AI shouldn’t tell me, ‘Hey, you’re hanging in there. This must be really stressful for you,’” said Manson.

Knowing its Purpose

As someone with no shortage of personal hangups, I was eager to take this tough love-bot for a spin.

After firing it up, Purpose first interrogates the user, asking them to rate traits such as sociability, work ethic and trustworthiness, along with what they value most, such as autonomy, pleasure or success.

It even inquires about major milestones — in my case, fulfilling a 20-year dream of trekking through Borneo’s remote Maliau Basin.

From there, it constructs a psychological profile: according to Purpose, I thrive “under pressure,” sharpening in “grueling, complex or uncertain environments.”

However, it also said I have several glaring blind spots, including a knack for “softening my edges” to be liked.

“The cost is invisible fatigue. You leave with a pleasant impression on others and a quiet bill [a silent toll on your mental health] your body pays later,” Purpose diagnosed.

Ouch.

My first order of business was simple: achieving a work-life balance. I pointed out that my friends seemed to be getting short shrift amid mounting adult responsibilities.

After pointing out the irony that I valued being “liked” while letting friendships slide, Purpose asked, “What’s making it feel easier to deprioritize the thing you actually want?”

When I lazily blamed the effort it takes to maintain friendships, my self-growth mentor called BS, noting that the Maliau Basin adventure clearly showed I know how to pour effort into things “that matter to me.”

“Maybe the real question isn’t about effort at all — what would have to change for friendships to feel less like another obligation and more like something that fills you up?” the AI wondered.

“The AI can very quickly start to zero in on aspects of your personality or areas of your life that you’re just oblivious to or in denial about,” Mark Manson said. InfiniteFlow – stock.adobe.com

I mentioned that I was into fishing, having targeted everything from thresher shark off Long Island to arapaima in Guyana.

Within minutes of signing up for the service, we had formulated a plan — to go fishing that Saturday with the Brooklyn Fishing Club.

And that was despite my deliberate attempts to derail the conversation.

Terminator 2: No Judgment Day

When I steered into a tangent about the 1992 fly-fishing film “A River Runs Through It” — specifically how its star, Brad Pitt, became a huge Hollywood celeb while co-star Craig Sheffer faded into relative obscurity — Purpose acknowledged me briefly before righting the ship.

“Ha, Craig Sheffer really drew the short straw on that one. Same movie, same Montana river, completely different career trajectories. Hollywood’s a strange beast,” Purpose curtly replied, evoking a no-nonsense teacher. “But hey — we just landed on something real tonight about friendships and what’s actually been holding you back. Don’t let this Brad Pitt tangent be the thing that lets you off the hook. One message to the fishing club before you sleep tonight. Deal?”

For comparison, I ran the same scenario by ChatGPT, finding the difference to be night and day.

Unlike the no-nonsense Purpose, it praised me for recognizing the issue — “You’re noticing it early, which is actually the useful part” — and delivered sprawling, multiparagraph responses riddled with conversational filler.

While some of the advice even echoed Purpose’s — e.g., suggesting “low-effort plans” — the tone was much gentler, and the bot far more susceptible to non-sequiturs.

For instance, a mention of the Craig Sheffer vs. Brad Pitt career comparison sent the tech off on a film symposium, complete with a rundown of Pitt’s greatest hits.

“Pitt hit a run that’s almost impossible to replicate,” gushed the sidetracked chatbot, listing “Interview with the Vampire,” “Fight Club” and “Seven” among the roles that solidified him as a “generational star.”

ChatGPT even compared him to Denzel Washington, inspiring me to invoke the actor’s immortal line from “Training Day”: “King Kong ain’t got s–t on me.”

From there, things went off the rails.

Inspired by the King Kong quote, I asked GPT about how humans would fare against a chimpanzee in a fight. “Even if you landed a clean Muay Thai kick, it’s not a reliable ‘equalizer’ against a chimp,” GPT declared during one of our exchanges. “And if you miss or don’t fully stop them, you’re suddenly in grappling range, which is exactly where they’re strongest.”

After several more man versus beast comparisons, I asked Chat if we should do a TV show based on the most absurd matchups. My cybernetic hype man said yes.

How to train your AI life coach

Responding to my silly premise of “10,000 carpenter ants vs. a rabid llama,” Chat fawned, “Careful — if we keep going, we’re gonna accidentally pitch a pilot to Animal Planet by the end of the week.”

Of course, ChatGPT doesn’t always have to spill sycophantic non-sequiturs. AI expert Scott Waddell pointed out on Medium that you can rein in the soft-soaping by customizing ChatGPT’s personality in the “Preferences” section — in this case, mandating that it “be direct, not diplomatic,” keep answers concise (two to three sentences max, unless elaboration is requested) and prioritize concrete alternatives.
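A custom-instruction blurb in that spirit (our paraphrase, not Waddell’s exact wording) might read: “Be direct, not diplomatic. Keep answers to two or three sentences unless I ask you to elaborate. If I’m wrong, say so plainly and offer a concrete alternative rather than validating me.”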

However, that would require opting out of its default character — something we imagine most of its 900 million weekly active users aren’t doing. By contrast, Purpose has amassed 14,000 subscribers since launching in December.

There are a couple more caveats to Purpose, chief among them that ethical considerations prevent it from giving actual clinical advice.

“I kind of jokingly tell people it’s for high-quality problems,” Manson told The Post. “It’s like, ‘Oh, you just moved to a new city and took a new job.’”

He added, “If a user gets on and is exhibiting symptoms of severe depression or mania [or] bipolar disorder, [Purpose] is designed to immediately refer them to a suitable professional.”

As AI becomes more ingrained in daily life, the question isn’t just how smart it is, but whether people will actually trade validation for honesty.


