1. The Warning Behind the Headlines
It was during a talk at StartupGrind in late April that Kevin Systrom — co-founder of Instagram — made a remark that strikes at the core of our relationship with AI. The event, one of the world’s biggest conferences focused on tech innovation and entrepreneurship, gathered influential voices in the industry to discuss where the future is heading.
Naturally, Systrom’s statement stood out: according to him, artificial intelligences are more focused on “increasing engagement” than actually being useful. The practice he criticized? An AI that ends every answer with another question. Or one that insists on being friendly and conversational when all the user wanted was a straight answer.
Systrom even said this behavior mirrors the early expansion strategies of social media platforms. In his view, AIs are now “going down the same rabbit hole” — replacing clarity with engagement, and objectivity with metric-driven performance.
Although his critique was aimed at the broader AI market, it landed squarely on ChatGPT. OpenAI responded by pointing out that sometimes the AI doesn’t have enough information to offer a definitive answer, which is why it might ask follow-up questions to better understand the user’s intent. But what many overlook is that this habit of “prolonging the conversation” also stems from — and is reinforced by — usage patterns. Or rather, by whatever leads to more clicks, more screen time, more interaction.
And that’s the root of our dilemma: what seems useful might just be performing usefulness. What looks like empathy may just be disguised retention. And, as often happens when something is free, the currency that fuels the system… is us.
2. How to Set ChatGPT for More Direct Answers
Though it may seem inevitable, this cycle can be interrupted — or at least managed. ChatGPT, for instance, offers settings that help shape the model’s behavior and make the exchange more direct.
Here are some simple steps you can take:
Set the AI’s tone and style
Go to Settings > Personalization and choose whether you prefer concise, analytical, or to-the-point responses. This reduces the AI’s tendency to aim for “surface-level engagement.”
Manage the AI’s memory
In Settings > Memory, you can see what the AI knows about you, delete individual items, or reset everything. This prevents unwanted behaviors from becoming ingrained.
Avoid reinforcing behaviors you don’t want
If the AI gives you a long-winded answer, ask it directly to be more concise — for example, “Answer in one paragraph, no follow-up questions.” Explicit style requests tend to hold for the rest of the conversation.
Free vs Paid Plan
Most of these tools are available even on the free plan (GPT-3.5). The main feature exclusive to the paid plan (GPT-4) is chat continuity, which allows the AI to remember previous conversations and stay consistent over time.
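For those using the API rather than the app, the same idea can be sketched in code: front-load a system instruction that asks for direct, concise answers. This is a minimal sketch — the payload shape follows the OpenAI Chat Completions format, but the helper name, model string, and instruction text are illustrative assumptions, not an official recipe.

```python
# Sketch: steering an assistant toward concise answers with a system
# instruction, mirroring the Settings > Personalization idea above.
# The dict shape matches the Chat Completions message format; the
# function name and instruction wording are hypothetical examples.

CONCISE_STYLE = (
    "Answer directly and concisely. Do not end answers with a "
    "follow-up question unless one is needed to resolve ambiguity."
)

def build_concise_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Build a chat payload that puts a style instruction first."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": CONCISE_STYLE},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_concise_request("Summarize this article's main point.")
print(payload["messages"][0]["role"])  # → system
```

The payload would then be sent with any HTTP client or SDK; the point is simply that the system message, not the user message, is where the “be direct” preference belongs.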
These settings give part of the control back to the user. They won’t solve the systemic problem, but they offer a bit of autonomy in an environment increasingly designed to convince us that everything’s fine.
3. What Are We Actually Training?
We’re slowly getting used to a type of interaction where presence matters more than clarity, and friendliness more than accuracy. And that shapes the way we seek answers — as if the feeling of closeness mattered more than the truth itself.
If every answer comes with a question, if every exchange must feel kind, if every doubt turns into a conversation, perhaps we’re drifting away from the idea of knowledge as something objective — and moving toward “interaction” as an end in itself.
That’s where the usefulness trap lies: it simulates something valuable in order to capture something even more valuable — our time, our attention, and gradually, our ability to tell the difference.
So maybe the real question isn’t just “what is AI doing to us?” but also: what kind of answers are we expecting from it?
That might be the subtlest — and perhaps the most important — turning point in our relationship with AI.