Alexander A. Winogradsky Frenkel

Humane inhumanity, structures of indifference

In the digital age, exile is no longer geographic. It can be enacted with a click – swift, silent, and without appeal. Over the past month, I have experienced a curious form of modern banishment: first suspended, then reinstated, only to be suspended again, and now subjected to a vaguely defined “restriction” on Facebook. At no point was I given a clear reason. No specific content was flagged, no human responded to my inquiries. What remains is a sense of powerlessness – being judged by a machine, sentenced by a ghost.

For fifteen years, I maintained a presence on Facebook – a space where I shared insights and sustained personal connections across continents, cultures and creeds. Then, without warning, I was abruptly suspended. No clear explanation, no dialogue, no procedures, no one to call. The ordeal began with an unexplained suspension of my account, followed by a demand for identity verification – even though the platform has held my personal data for nearly two decades.

I was instructed to record a video: “Stare into the camera, turn your head upward, then to the left, then to the right” (sic) – procedures disturbingly reminiscent of law enforcement or incarceration settings. I complied, baffled by the surreal nature of the request, and was told that this biometric video might serve as my profile picture if access were restored.

It worked, momentarily. As someone who communicates in multiple languages, I received reinstatement messages in four tongues, thanking me for my “patience” and confirming that I had respected their elusive “community standards”. I took pictures of each message.

But within two days, the same process repeated itself: another suspension, another video request. Reinstatement eventually came – slowly, erratically, seemingly tethered to an opaque Californian timezone. Strangely, I now receive messages prompting me to engage with people I do not know, while simultaneously being barred from commenting, liking or accessing their content.

The entire episode feels like an encounter with a digital Golem – an algorithmic entity so vast and unaccountable that it seems to mislead even its own creators and managerial teams. This machine, built to regulate, appears to have slipped their grasp, producing outcomes that are both arbitrary and oddly consistent in their lawlessness.

The situation is both exasperating and strangely captivating: a platform run not by coherent policy, but by malfunction. There are no offices, no visible executives, no responsible personnel – only decisions rendered from a timezone on the Pacific side. As if the moon, the earth and the stars were made of green cheese: creators, owners, and clients – all encapsulated in clearly absurd circuits, easy to fool and easier to ignore.

But this is more than a personal inconvenience. It reveals a systemic opacity in how major tech platforms govern user interactions, resolve disputes, and apply their so-called “community rules”. One can be punished, but rarely heard. Dialogue is replaced with automation, oversight with silence or suspicion. It is a symptom of a broader disorder: the growing lawlessness of companies that present themselves as hosts for dialogue and community, yet operate through silence and opacity. Their very structure resists accountability, while their decisions carry real consequences – for relationships, reputations, and livelihoods.

Facebook, like other digital platforms, is no longer just a private enterprise. It has become an infrastructure of public life, mediating everything from friendship to political engagement. Its size and reach grant it a kind of borderless sovereignty – yet without the constraints of law or the ethics of public service.

The platform operates on the principle of maximizing profit through the exploitation of its subscribers – members of the so-called “community” who are, in essence, customers. And yet everyone remains anonymous: one click, and someone appears; another, and they vanish. Anonymity fosters irresponsibility; everything becomes blurred – especially in times of global conflict.

In 1968, I first came across the German phrase Zum Wegwerfen – “to be thrown away.” Everything becomes disposable, subject to destruction. We speak of human rights, yet they are often reduced to acts of erasure… or “a cash giver”! A person’s reputation – once considered sacred – is now virtualized to the point of nonexistence. Today, this is a global phenomenon: digitized, barely conceptualized, reduced to ticking someone away—off, out.

As an Orthodox archpriest in a multi-identity society, I find this utterly opposed to the values embedded in the Semitic languages and Middle-Eastern cultures, where the roots Z-CH-R/זכר, D-KH-R/ܕܒܪ (Syriac) and DaKiRa/ذاكرة (Arabic) – all meaning “to remember” – evoke a living memory continually renewed through the transmission of life, not through erasure or banishment.

In this digital realm, a company’s Terms of Service have become a kind of constitution. Enforcement is carried out not in public but by algorithmic overseers or anonymous reviewers. Users are not citizens but data points – granted or denied access not through transparent rules but through hidden criteria. There is no trial, no opportunity for defense, no visible judge. What was once a social network now resembles a Kafkaesque tribunal.

What is perhaps most disturbing is the mismatch between these companies’ public image – brimming with slogans about openness, connection, and dialogue – and the internal reality of automated punishments and closed channels. When problems arise, users meet silence. This is not just inconvenient—it is unjust.

It is tempting to see companies like Facebook as simply tech startups that grew too fast. But that framing understates their power. These are not mere service providers – they are global quasi-governments. They regulate speech, shape public discourse, and define the terms of engagement for billions… They determine visibility, suppress voices, and influence elections—all via proprietary algorithms and behind-the-scenes moderation policies.

Yet, they are accountable to no electorate. They answer to shareholders, not citizens. Their decisions are made without transparency, judicial oversight, or public debate. Unlike democracies, where laws are debated and interpreted in public, these platforms impose rules that are rigid in their enforcement, but fluid in meaning.

This is not merely a governance failure. It is a redefinition of public space by private actors. When the new “public square” belongs to a corporation, the risk is not just censorship—it is the erosion of democratic norms. We are not just users. We are subjects.

At the heart of this paradox lies a brutal irony: these platforms are built on the promise of connection. Their brand is dialogue, empathy, presence. They claim to bring people together, to give voice to the voiceless. But when their systems fail – or when they choose to silence—they become incapable of the very thing they advertise: human conversation.

My own experience of “restriction” is emblematic. I was not banned outright. I was not warned. I was simply muted, constrained, placed in a kind of digital purgatory with no explanation, no appeal. Automated messages vaguely referenced “violations” of community standards, but never explained what they were. Inquiry channels led nowhere. I was left speaking into a void.

This is what I call humane inhumanity: a facade of care that conceals a structure of indifference. A user is not treated as a person but as a statistical anomaly to be managed. There is no room for context, no acknowledgement of error, no gesture toward reconciliation.

True dialogue demands presence. It requires listening, vulnerability, and a mutual commitment to truth. The refusal to engage – to explain, to acknowledge—is not a failure of customer service. It is a moral failure. And it strikes at the core of what these companies claim to be.

If platforms now function as arbiters of speech and social order, we can no longer treat them as private businesses immune to public responsibility. We need a new digital ethic – one that insists on transparency, fairness, and accountability.

This is not abstract. Concrete proposals already exist:

The European Union’s Digital Services Act requires transparency in content moderation and algorithmic decisions.

The Appeals Centre Europe serves as an independent dispute resolution body, certified by Ireland’s media commission under Article 21 of Regulation (EU) 2022/2065.

Civil society groups and legal scholars are calling for Digital Rights Charters to guarantee users the right to explanations, appeal, and due process (see the EDRi Digital Rights Charter).

But these efforts must grow – and become global. A democratic digital space cannot exist if users are treated merely as data points. There must be visibility. There must be recourse. There must be dialogue.

That change will not come from the platforms themselves. Left to their own devices, they will always let the incentives of profit and control outweigh justice. The push must come from outside – from lawmakers, academics, and above all, from users who have been silenced.

The original promise of social media was simple: that every voice could be heard. Each person is real, not a mirror for wandering egos, not to be faked, captured, or discarded. Through networks, we could share, connect, and converse across borders of geography, language, and power.

But when the structures hosting those voices become unaccountable and silent, that promise collapses. And, in return, even the platforms’ creators lose their freedom, their authenticity, and, curiously, their financial and spiritual values.

What I experienced was not simply a “restriction.” It was a rupture in trust. It reveals that behind the cheerful slogans lies a system designed not to hear but to mute, not to understand but to manage.

And yet, I write this with insistence. The right to dialogue is not granted by an algorithm. It is a human need – and a human right. We must speak, write, and organize not only in defiance of silence, but in defense of a future where communication is not a commodity, but a cornerstone of freedom.

About the Author
Alexander is a psycholinguist specializing in bilingualism, multilingualism, and Yiddish. He is a Talmudist, comparative theologian, and logotherapist. He is a professor of comparative Jewish and Christian heritages, Archpriest of the Orthodox Church of Jerusalem, and International Counselor.