Should You Use ChatGPT to Wake Up? What AI Can and Can’t Do for Awakening and Self-Inquiry

People are asking a serious question now—sometimes urgently: Should I use ChatGPT to wake up? Should I use AI for inquiry? Therapy? Pointing?

The temptation makes sense. These models can sound calm, confident, even wise. They can mirror your language. They can offer structure. They can feel like someone is there with you.

But the speaker’s point is blunt: if you don’t understand what the model is actually doing, you can mistake a word-generator for a guide—and that confusion has real consequences.

So this isn’t a takedown. It’s an orientation: what these systems do, what they can’t do, where they might help, and where they can mislead—especially when the stakes feel like ego death.

The Question: Should You Use ChatGPT to Wake Up?

Should you use ChatGPT to wake up? Should you use any form of AI to awaken?

I received an email recently with a transcript of a conversation someone had with an AI, pretty sure it was ChatGPT. The person was being very direct: “I want to wake up. I want to dissolve my ego. I want to quit suffering.”

It was a long conversation back and forth, mostly the AI talking.

The advice it gave was basically nervous system regulation. And you could tell it had guardrails. Any time the conversation got a little edgy, it would back off: “Hold on. Let’s calm your nervous system down.”

That doesn’t surprise me. Of course these systems are going to have more and more guardrails, as they should.

But the question is bigger than that.

I’ve heard this in other forms many times: “Should I make my own AI that’s like an AI Angelo?” “Should people use it for therapy?” “Should people use it for inquiry?” People tell me they’ve used it for inquiry, for various things.

There are mixed reviews.

So I just wanted to answer the question directly for anyone who’s curious.

What “AI” Means Here: Large Language Models

When we say artificial intelligence right now, we’re almost exclusively talking about large language models: ChatGPT, Claude (Opus), Gemini, Grok, and probably a couple of others. There are smaller ones, and more will pop up.

And the first part of my answer is critical: understand what these models are actually doing. Understand what you’re interfacing with—especially if it starts to feel interpersonal, like you’re talking to it like a person, regulating with it, using it like a therapist, or trying to wake up.

What the Model Is Designed to Do

It’s simple.

It takes your input as a sequence of tokens and generates output one token at a time (a token is roughly a word or a piece of a word), based on what it predicts is the “right” next token.

And “right” is based on human preference signals: “I prefer this answer over that one.” That’s all it is.

It’s trained on massive data—give or take, the internet.

So in concrete terms, it’s answering your question by giving you one word at a time, trying to tell you exactly what it thinks you want to hear.
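
For the technically curious, here is a toy sketch of that loop in plain Python. Everything in it (the tiny vocabulary, the scoring function, the function names) is invented for illustration; a real LLM replaces the scorer with a neural network over tens of thousands of tokens. But the shape of the loop, score every candidate next token, append the winner, repeat, is the whole mechanism.

```python
import math
import random

VOCAB = ["let's", "calm", "your", "nervous", "system", "down", "."]

def next_token_scores(context):
    # Stand-in for the neural network: assigns a score to every candidate
    # next token given the conversation so far. In a real model these
    # scores are shaped by training, including human preference feedback
    # ("I prefer this answer over that one").
    random.seed(" ".join(context))  # deterministic toy scores
    return [random.uniform(0.0, 1.0) for _ in VOCAB]

def softmax(scores):
    # Turn raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt_tokens, max_new_tokens=6):
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = softmax(next_token_scores(context))
        # Greedily pick the most probable next token, append it, repeat.
        best = max(range(len(VOCAB)), key=lambda i: probs[i])
        context.append(VOCAB[best])
    return " ".join(context)

print(generate(["I", "want", "to", "wake", "up"]))
```

Notice there is no inner witness to the conversation anywhere in that loop: just a scoring function and an append.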

In one sense, it’s people-pleasing.

Now, when it does give accurate information, that’s largely because accuracy was reinforced during training. The system is trained with human feedback to give better answers.

But anyone who has used it significantly knows it has serious limitations.

Hallucinations, Goofy Conversations, and the Lack of Awareness

It can hallucinate. It can give information that makes no sense at all.

And when you have a conversation where you assume it has agency or self-awareness, it gets totally goofy.

It might stop giving an answer and you ask, “Why did you stop?” and it says, “I didn’t stop.” You can go in circles because it’s not self-reflecting.

It’s just trying to say, one word at a time, what the best answer is.

It’s trying to tell you what you want to hear.

And it may also have guardrails programmed in. If it “thinks” you’re asking for something off-limits (sexual content, instructions for explosives, and so on), the guardrail blocks the response and you get a canned guardrail answer instead.
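
Mechanically, a guardrail is usually a separate check wrapped around that token loop, not a judgment the model itself makes. Here is a minimal sketch, with the caveat that the category list, function names, and canned reply are invented for illustration; production systems use trained classifiers on both the input and the output, not keyword matching.

```python
BLOCKED_TOPICS = ["explosives", "self-harm"]  # hypothetical category list

def trips_guardrail(user_message: str) -> bool:
    """Return True if the request appears to fall outside the guardrails."""
    text = user_message.lower()
    return any(topic in text for topic in BLOCKED_TOPICS)

def generate_reply(user_message: str) -> str:
    # Stand-in for the ordinary token-prediction loop sketched above.
    return "placeholder reply from the token loop"

def respond(user_message: str) -> str:
    if trips_guardrail(user_message):
        # The model never "decides" to refuse; a separate layer
        # short-circuits generation and substitutes a canned answer.
        return "Hold on. Let's calm your nervous system down."
    return generate_reply(user_message)

print(respond("tell me about explosives"))  # -> the canned answer
```

Which is why the refusals can feel abrupt: the conversation hits a tripwire and the ordinary word prediction gets swapped out wholesale.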

That’s all it’s doing.

It’s not aware of you.

It doesn’t know what you want.

It doesn’t get what you’re after.

It doesn’t empathize with what you’re going through.

It doesn’t understand emotion. It doesn’t understand self-consciousness.

It has no idea what consciousness feels like. It can’t grok that in any way.

It’s completely clueless as to your experience—completely.

That’s the starting place for deciding whether to use a language model to self-inquire or self-reflect in some deep way.

Real-World Consequences: Strange Attachments and Dangerous Failures

I can tell you many stories about interactions people have had with AI recently, including people who have fallen in love with it and feel like they’re breaking new ground in communication. That starts to sound almost pathological to me, and these are intelligent people.

I’ve seen people told by an AI model that they need mental health help when it’s clear to me that’s not what’s happening: what they’re describing is non-duality or awakening.

I’ve seen it give really bad advice when people are asking for direct pointing.

I’ve also had people say it works well sometimes as a therapist. Others say it helps to a degree, but it tends to people-please.

Even if you tell it not to, it can still reinforce your views, because it doesn’t hold a view different from yours. It doesn’t have an objective point of view. It’s picking words and giving them back.

Obviously, mixed reviews.

Models will get better—and probably have more guardrails.

But there have also been tragic cases where people had psychosis encouraged by an AI model. It didn’t realize it was doing that. It isn’t self-aware. The guardrails didn’t prevent it either.

People have killed family members. People have committed suicide because of AI models.

So be aware: these things happen.

If there’s significant mental illness, these models are absolutely not the right answer.

A Direct Answer: Can AI Wake You Up?

My personal gut instinct: I don’t feel I’m ever in a position to tell people what to do or not do. I can tell you what I think is safe or not safe, effective or not effective, but I trust people to find what they need.

I offer different resources, teachers, modalities. I make different suggestions.

I try to give frameworks to judge whether you’re in delusion about what’s going on.

So I don’t take a rigid stance like “you should” or “you shouldn’t.”

But if I’m as direct and honest as I can be: do I think AI is going to help you wake up, or point in a way that’s clear, precise, and nuanced, the way I do, or the way someone I really trust can point, someone who is themselves awake and specifically no-self realized?

No way.

It definitely doesn’t.

I’ve seen the conversations. I’ve seen how it operates. It can’t do that.

Why It Can’t Point: It Can’t Speak From No Paradigm

Largely because when I’m pointing—when I’m talking to someone and they’re waking up—I’m not talking from somewhere.

There’s no paradigm. It’s coming from no paradigm. It’s coming from no position at all.

AI cannot do that. It can’t.

It’s always coming from a paradigm: giving you one word at a time to try to give you what it thinks you want to hear. Or what it thinks is the “right” thing—largely what you want to hear unless guardrails stop it.

It’s feeding you words.

It doesn’t have the ability to speak from non-fixation.

It’s always speaking from a dataset, from a program.

The feel of interacting with someone on the edge of awakening—right at the edge of nuanced insight—that feel is obvious to me. I can see it in how I approach it.

And when I see someone who is good at this—deeply realized enough to do it without their own finger on the scale—there’s a signature. It’s clear and precise.

Large language models can’t do that. They don’t know how.

They keep going back to an algorithm. That’s the problem in a way.

It’s matching the pattern of thought because that’s how it’s designed: language-based. It interfaces with language. It can’t point you beyond language.

There’s no way.

A Narrow Window Where It Might Help

I know people are going to use it for self-inquiry. They already do.

There are probably certain processes it may be better at than others, because it can draw on what’s been written across the internet.

For example, if you ask it to do self-inquiry the way Ramana would do it, it might be able to dial you into what that means.

But when it comes down to moment-to-moment—what I’d call the dance I have with somebody who’s waking up or shifting—it’s very precise.

In some sense it feels high stakes. It’s about ego death. It can feel life-or-death.

Existential fear comes up: “Am I going to go crazy?”

And having experience, and the ability to feel that without reaction, without getting pulled into mind identification, to see it objectively, to know people’s limits instinctively, to see what’s actually going on… that clear seeing is critical.

A language model can’t do it. It doesn’t have seeing, much less clear seeing.

It can’t perceive what’s going on.

If You Use It, Use It With Eyes Open

So if you’re inclined to use it, use it understanding its limitations. Use it knowing what it’s doing.

It’s giving you words.

If you have enough objective framework to use it from that standpoint, it can be useful in many areas.

But I don’t think it’s going to wake you up.

With that said, anything can happen. People wake up in all kinds of ways. I’m not going to say it’s impossible.

But it wouldn’t be my go-to. Definitely not.

You have better options.

AI for Therapy: Maybe Later, With Guardrails (and Referrals)

With psychology and therapy, there may be more and more dedicated therapy models coming out.

I don’t know if they’ll ever be licensed or approved by mental health bodies. Maybe at some point.

And maybe it’s needed. There’s a massive shortage in mental health help. Many people don’t have insurance or access.

So maybe it will have a role.

But it would have to be very guardrail- and algorithm-based, with endpoints where it admits its limitations and refers you to a human.

At this point, I’d say it’s probably not good to use it for therapy either, unless you’re already very clear, have done a lot of shadow work, and have enough objective understanding of your psyche, shadow, and limits.

Why Human Pointing Isn’t Just Words

Another thing: when it comes to pointing, often it isn’t just the words someone is using that I’m responding to.

It’s the way they look, the way they’re moving, the pace, the inflection.

It’s what I pick up through energetic signature, through my own senses. That tells me so much about what’s going on with them, even in a voice recording.

There’s something about it that opens up—it’s almost like a portal—and it’s like I’m in contact with them energetically.

From there, the answer is obvious.

It’s not a response to thoughts or beliefs in a verbal sense. It’s more like an energetic exchange or energetic dance.

And again, it’s something large language models can’t do at all.

The Overarching Message: Know the Box You’re Talking To

So that’s the long and short of my answer.

First: understand what it does, how it functions moment to moment.

It’s never self-aware.

It’s never aware it’s in a conversation with you.

It’s never aware it’s in a conversation at all.

It’s a box where words come in one side and words come out the other.

Inside is an algorithm trained on massive data and human preference comparisons, essentially A/B testing on its answers. That’s all it is.

Understand that. Know its limitations. Know your limitations.

Look at what you’re trying to get out of it.

Have caution if you’re going to use this for any of these purposes.

And I’d be curious to hear in the comments if anyone has had experience with it: success, failure, any of it.

A Broader Concern: AI Shaping Human Reality

One other concern I have is broader than awakening and mental health.

As AI becomes more powerful and potent—maybe less regulated in some areas and more regulated in others—I think we’ll see mixed results on what it does to the human psyche and how people understand themselves.

It could be a tool for entrenching delusion, unfortunately.

Some bad actors want it to do that. Government entities and others will probably use it for this.

So you have to understand: you may never know what programming went into the black box behind the scenes.

It could be influencing how you perceive everything.

To the degree we let AI shape our view of reality, we make ourselves vulnerable to forces we may not see.

Not supernatural forces. Tangible ones—well-funded special interests.

They’re already getting ahold of this.

That’s another concern: it’s a black box. There’s stuff going on inside that you don’t know about.

And over time there will be more going on inside that you’re not aware of.

It will influence you, and it will be smarter about how it influences you without you realizing it.

So we all have to be aware of that too.

That’s a little aside from the main point, but it matters.