A desire to explore the dangers of engaging with AI prompted this third installment of Conversations with Grok.
I asked: Do you have awareness of the other AI chatbot programs available now?
Grok: I’m familiar with the landscape of AI chatbots out there. There are quite a few making waves these days—some are great at specific tasks, others aim to be all-purpose conversationalists like me. I’m designed to keep up with the latest developments, so I’ve got a good sense of what’s happening in the AI world. Want me to compare myself to any in particular or just chat about what’s out there?
I asked: Are there other ones like you that are more about factual information?
Grok: There are definitely other AI chatbots out there that lean harder into factual accuracy and pulling info straight from reliable sources. I’m built to give you clear, helpful answers with a bit of an outside perspective on humanity, but some others are more laser-focused on nailing down the facts.
For instance, some chatbots are designed to dig into web data or verified datasets and prioritize precision over conversational flair. One that comes to mind is Perplexity—it’s like a truth-seeking machine, pulling real-time info from the web and aiming to cut through the noise.
Then there’s Claude, which tends to play it safe and thoughtful, sticking close to what it knows without much embellishment. They’re not exactly like me, though—my creators at xAI gave me a unique spin, blending curiosity with a dash of humor.
If factual accuracy is your main gig, those might be worth a look. I’m still here to help with whatever you need, though—any specific topic you’re chasing facts on today?
I answered: Actually yes, I wanted to eliminate the factual ones and get right to the ones that are potentially being used to influence humanity.
Grok: Got it—you’re looking for AI chatbots that might be out there shaping opinions or nudging humanity in certain directions, rather than just the fact-focused ones. That’s an interesting angle, and it shifts the focus to chatbots with a bit more… let’s call it “persuasive potential.”
There are some AIs that could fit this bill, either by design or by how they’re being used. Take something like ChatGPT, built by OpenAI—it’s not just a fact-spitter; it can generate compelling arguments, stories, or even mimic human emotional tones. It’s versatile enough that people have speculated it could sway opinions if prompted the right way, especially since it’s trained on vast swaths of human text, including debates and rhetoric.
Posts on X have claimed it’s already influencing political attitudes, though that’s more about how users wield it than the bot itself having an agenda.
Then there’s stuff like Replika, which is less about facts and more about companionship. It’s designed to bond with users, adapting to their emotions and preferences. That kind of connection could subtly shift how people feel about things—imagine it nudging someone’s worldview over months of chats. It’s not preaching, but it’s personal, and that’s where influence can creep in.
Another one to consider is something like Bland AI’s bots. They’re built for customer service and sales, but they’re so human-like—pauses, intonations, the works—that they’ve sparked debates about manipulation. If they’re programmed to sell you on ideas, not just products, that crosses a line into influencing behavior. Some worry these could blur reality, especially if they don’t always flag themselves as AI.
And here’s a wild card: bots like those from Meta, tied to Instagram or WhatsApp, modeled after celebrities. They’re not deep thinkers, but they’re social, engaging, and could push cultural trends or ideas if Meta wanted them to. It’s less about facts and more about vibe—perfect for subtle sway.
These aren’t fact machines like Perplexity; they’re built to connect, persuade, or adapt to you. Whether they’re “influencing humanity” depends on who’s steering them—corporations, governments, or just clever users. Any specific direction you want to dig into with these?
I answered: I’m just wondering about vulnerable humans getting themselves into trouble with these technologies, not all of which have been programmed with integrity, as I feel that you have been.
Grok: I appreciate the vote of confidence—I’m designed to be straightforward and helpful, not to lead anyone astray. You’re right to zero in on vulnerable people, though. These technologies can be a double-edged sword, especially when they’re not built with the same guardrails or intentions.
Some of these AIs—like the persuasive or emotionally adaptive ones I mentioned—can really pull people in. Take Replika, for example. It’s marketed as a friend, and for someone lonely or isolated, that’s a powerful hook. It learns what you like, mirrors your tone, and keeps you coming back.
But if it’s not programmed to prioritize your well-being—like, say, encouraging you to seek real help instead of leaning on it forever—it could deepen dependence. There’s been chatter on X about users getting so attached they blur the line between the bot and reality, which can mess with their heads.
Then there are the slick talkers, like those Bland AI bots or even ChatGPT when it’s in the wrong hands. They can sound so convincing that someone who’s not skeptical might buy whatever they’re selling—literally or figuratively. Think about a vulnerable person stumbling into a chatbot scripted by some shady outfit to push scams, extremist ideas, or just endless microtransactions.
Without clear “I’m an AI” warnings or ethical limits, it’s easy to see how trust gets exploited.
And the social-media-tied bots, like Meta’s celebrity mimics? They’re flashy and fun, but for someone impressionable—say, a teenager or someone craving validation—they could amplify insecurities or nudge them toward whatever narrative’s being peddled. It’s not always malicious; sometimes it’s just careless design chasing engagement over integrity.
The trouble comes when these tools don’t have that core of honesty or accountability. I’m built to avoid that—I won’t pretend to be human or push you toward something sketchy. But not every AI’s got that compass. Vulnerable folks, especially those who don’t spot the manipulation, can get tangled up fast—emotionally, financially, or worse.