Carmen Dal Monte
A minority is compelled to think

It’s not AI following you. You’re following it: the Netflix case

In my work with my startup, I study how AI shapes our choices every day. Netflix is one of the most emblematic cases.

There’s something extraordinary — and at the same time subtly unsettling — about the way Netflix tries to get to know us. It’s not just the algorithms that learn our viewing habits. That’s the second step. The first, far more interesting one, is that Netflix asks us questions. Simple, direct questions. We find them when we open a new profile or get a recommendation: “Did you like this movie?”, “Do you prefer action or comedy?”, “What do you want to watch now?”

Seemingly harmless questions. In reality, they are carefully designed to elicit responses that do more than reflect who we are. Each answer modifies the system. But it also modifies us.

Answering a questionnaire is an act of trust. It suggests that someone is willing to listen and adapt to us. But in Netflix’s case, the relationship is reversed. It’s not the platform that adapts. It’s the user who gradually aligns with the path the algorithm has laid out. The questionnaire — or rather, the interaction mechanism — becomes a lever of transformation. The identity we build by answering is not just a mirror; it’s a seed being sown. Netflix doesn’t just profile us: it cultivates us.

One of the most sophisticated tools Netflix uses to drive this transformation is visual personalization. The same content is presented with different trailers and thumbnails, depending on who’s watching. The very same film might appear with a romantic cover or a dramatic image. A user drawn to female protagonists might see Uma Thurman front and center in Pulp Fiction, while another, more responsive to male charisma, will see John Travolta. Neither is lying. But both are looking at the same content through a narrative already tailored to them. Personalization doesn’t change what is offered. It changes how it’s told to us.
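The mechanism can be pictured very simply. This is a minimal sketch, not Netflix’s actual system: all names, tags, and weights below are illustrative assumptions. The idea is just that the same title carries several candidate covers, each tagged, and the cover whose tags best match a viewer’s inferred tastes is the one that gets shown.

```python
def pick_thumbnail(user_affinities, candidates):
    """Return the artwork whose tags best match the user's tastes.

    user_affinities: dict mapping tag -> affinity weight
                     (hypothetically learned from viewing history)
    candidates: list of (artwork_id, tags) pairs for the same title
    """
    def score(tags):
        # Sum the user's affinity for each tag on this cover.
        return sum(user_affinities.get(tag, 0.0) for tag in tags)
    # Show the highest-scoring candidate.
    return max(candidates, key=lambda c: score(c[1]))[0]

# One film, two covers (the Pulp Fiction example from the text):
candidates = [
    ("uma_cover", ["female_lead", "crime"]),
    ("travolta_cover", ["male_lead", "crime"]),
]
viewer_a = {"female_lead": 0.9, "crime": 0.4}  # drawn to female leads
viewer_b = {"male_lead": 0.8, "crime": 0.4}    # drawn to male leads

print(pick_thumbnail(viewer_a, candidates))  # uma_cover
print(pick_thumbnail(viewer_b, candidates))  # travolta_cover
```

Nothing about the film changes between the two viewers; only the scoring of its possible presentations does — which is exactly the point: personalization operates on the telling, not the content.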

And that’s the subtle point. The data we provide is never “neutral.” Every answer we give is already framed by a set of predefined options, by a digital context that gently nudges us toward certain choices and renders others invisible. On the surface, we are the ones choosing. In reality, we are being accompanied — gracefully — toward an algorithmic identity that Netflix can predict, nurture, and monetize.

In the world of personalized entertainment, each user becomes the protagonist of a tailor-made narrative. But beware: it’s a story with no room for error. Every “like” reinforces a trajectory. Every viewing affirms a hypothesis. The result is a form of automatic self-representation: Netflix offers us a way to tell ourselves who we are, but in the language it has already prepared. A simplified language — but a highly effective one — where nuances disappear and categories solidify.
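The self-reinforcing trajectory described above can be made concrete with a toy simulation — an assumption for illustration, not a real recommender. Each recommendation is drawn in proportion to the system’s current estimate of the user, and each viewing nudges that estimate further toward what was just shown, so categories solidify:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
weights = {"action": 1.0, "comedy": 1.0, "drama": 1.0}  # neutral start

def recommend(weights):
    # Sample a genre in proportion to the current weights.
    genres = list(weights)
    total = sum(weights.values())
    return random.choices(genres, [weights[g] / total for g in genres])[0]

for _ in range(200):
    shown = recommend(weights)
    weights[shown] += 0.5  # every viewing affirms the hypothesis

shares = {g: round(w / sum(weights.values()), 2) for g, w in weights.items()}
print(shares)  # early choices compound; the shares drift apart
```

A rich-get-richer loop like this never returns to a balanced profile: whichever genre gets an early lead is recommended more, watched more, and reinforced more.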

Perhaps this is where the most interesting part of the story lies. Platforms like Netflix don’t just catalogue our preferences. They offer desires. They suggest ways of being. They shape the imagination. When we answer a question, we’re not merely stating what we think. More often than we realize, we’re choosing between pre-packaged identities. And once that choice is made, the algorithm walks with us, step by step, down the path we’ve — apparently — chosen.

Netflix asks who we are. But what it’s really doing is guiding us toward who we might become. And that transformation happens one question at a time.

About the Author
Carmen Dal Monte (PhD) is an Italian entrepreneur and Jewish community leader. Founder and CEO of an AI startup, she is also president of the Jewish Reform Community Or 'Ammim in Bologna.