Facebook Developers Conference (F8) 2019: My Takes

At the Facebook Developers Conference (F8) in San Jose, one word dominated the conversation: privacy.

Across keynotes and developer sessions, the message was consistent: Facebook (now Meta) is emphasizing self-policing, internal regulation, and artificial intelligence as its mechanisms for managing harmful content and hate speech and for keeping the platform safe.

On the surface, this sounds reasonable. Necessary, even.

But beneath that surface lies a much deeper and more troubling question—one that is not technical, mathematical, or even purely political.

It is fundamentally ethical.

This Is Not an Engineering Problem

When platforms talk about using AI to regulate content, the framing often implies that this is an optimization problem:

  • Better models
  • Better classifiers
  • Better enforcement

But content moderation is not merely an engineering challenge.

It is a problem of moral judgment.

Words like harmful, safe, and hate are not neutral terms. They carry value assumptions. They presuppose definitions of good and bad, right and wrong—definitions that societies have historically struggled to agree upon even with centuries of philosophy, law, and democratic debate.

To then codify those moral judgments into algorithms—and automate their enforcement at global scale—is unprecedented.
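To make that abstraction concrete, here is a deliberately toy sketch in Python. Everything in it is hypothetical (the threshold, the score, and the actions are invented for illustration), but it shows what "codifying a moral judgment" looks like in practice: an ethical decision dressed up as a tuning parameter.

```python
# A toy, hypothetical moderation rule -- not any platform's real pipeline.
# The point: every constant below is a moral judgment wearing an engineering costume.

HARM_THRESHOLD = 0.85  # Choosing 0.85 instead of 0.60 or 0.95 is an ethical decision, not a tuning detail.

def moderate(harm_score: float) -> str:
    """Map a model's 'harm' score for a post to an enforcement action."""
    # The score looks objective, but it comes from a classifier trained on labels
    # that already encode someone's definition of "harm".
    if harm_score >= HARM_THRESHOLD:
        return "remove"            # speech deleted
    if harm_score >= HARM_THRESHOLD - 0.25:
        return "reduce_reach"      # speech quietly demoted
    return "allow"

# Once this ships, the debate over what counts as harmful is settled silently,
# millions of times a day, by these three branches.
print(moderate(0.9), moderate(0.7), moderate(0.1))  # remove reduce_reach allow
```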

When Platforms Become Police and Legislators

Imagine a corporation that owns all public space in a city. Every wall. Every square. Every place where people speak.

Now imagine that corporation not only polices behavior in those spaces, but also decides what speech is permissible—and writes the rules governing it.

At that point, the corporation is no longer just a platform. It becomes:

  • Legislator
  • Judge
  • Enforcer

Democratic societies typically attempt to solve this problem through collective deliberation, imperfect as it may be. Corporations, however, operate under entirely different incentives—engagement, monetization, legal exposure, and public relations.

And unlike governments, their decision-making processes are opaque.

The Neutral Platform Myth

For years, Facebook’s position—articulated by Mark Zuckerberg—was that it was a neutral platform. Users generate content; the platform merely hosts it.

That position becomes harder to sustain when the platform:

  • Actively monetizes engagement
  • Algorithmically amplifies certain content
  • Shapes visibility, reach, and distribution

A useful analogy is this: we don’t sue phone companies for conversations criminals have over the phone. But phone companies are not monetizing individual conversations, prioritizing certain calls, or suppressing others.

Social platforms do all three.

Values Are Already Embedded—Whether We Admit It or Not

Nothing about a news feed is neutral.

Every ranking decision reflects values:

  • What should be seen?
  • What should be suppressed?
  • What deserves amplification?

Once those decisions are encoded into AI systems, values stop being debated and start being executed automatically.

That is where the real danger lies—not in the intention to reduce harm, but in the quiet solidification of moral assumptions into code.
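A minimal, hypothetical sketch makes the point. The signals and weights below are invented, and no real feed is this simple, but the structure is representative: a handful of numbers quietly answer the three questions above, and sorting by the result enforces those answers automatically.

```python
# A hypothetical ranking sketch -- not Facebook's actual News Feed algorithm.
# The signals and weights are invented to show where values hide in code.

def rank_score(p_click: float, p_comment: float, p_report: float) -> float:
    """Higher score means more distribution; every weight is a judgment about what deserves amplification."""
    return (
        1.0 * p_click      # rewarding engagement is a choice
        + 2.5 * p_comment  # rewarding heated discussion even more is another
        - 4.0 * p_report   # and "harm" is whatever the report-prediction model learned to flag
    )

posts = [
    {"id": "calm_news",    "p_click": 0.20, "p_comment": 0.05, "p_report": 0.01},
    {"id": "outrage_bait", "p_click": 0.30, "p_comment": 0.40, "p_report": 0.10},
]

# Sorting by this score executes the embedded value judgments automatically,
# with no one re-debating them per post.
feed = sorted(posts, key=lambda p: rank_score(p["p_click"], p["p_comment"], p["p_report"]), reverse=True)
print([p["id"] for p in feed])  # the weights, not a human editor, decide the order
```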

Reverse-Engineering Society From the Top Down

Technology has forced us into an unusual position.

Instead of building systems from shared philosophical foundations upward, we are now reverse-engineering ethics after the fact—trying to patch moral reasoning onto infrastructures already operating at planetary scale.

This tension will not remain contained within tech companies. It will provoke political backlash, regulatory responses, and cultural conflict. We are already seeing the early stages of that reaction.

There Are No Easy Answers—Only Tradeoffs

There are clearly things that should not be shared. But even where consensus seems obvious, edge cases quickly appear at the margins.

Who decides?
By what authority?
According to whose values?
And with what recourse when decisions are wrong?

When a handful of companies connect billions of people, they must either:

  1. Become part of a regulated public system, or
  2. Step back entirely and act as neutral infrastructure

Trying to occupy both positions simultaneously creates instability—for platforms and societies alike.

The Question That Matters Most

The real question is not whether platforms should moderate.

It is who gets to decide what “good” means—and how that decision gets translated into systems that quietly shape human interaction at scale.

We don’t yet know what the answer will be.

But the consequences will be profound.


FAQs

Why is content moderation an ethical issue and not just a technical one?

Because moderation requires moral judgments about harm, safety, and intent—concepts that cannot be objectively defined or optimized without value assumptions.

Can artificial intelligence fairly regulate speech?

AI can enforce rules consistently, but it cannot determine whether the rules themselves are just, ethical, or culturally appropriate.

Are social media platforms neutral?

No. Algorithms prioritize, suppress, and amplify content based on engagement and business incentives, embedding values into distribution systems.

How is social media different from phone companies?

Phone companies do not monetize individual conversations or algorithmically prioritize speech. Social platforms do.

What happens if corporations define acceptable speech?

They effectively become lawmakers and enforcers without democratic accountability, creating long-term societal risks.



If you want help making sense of complex systems and their implications, email me at [email protected] or book a call.

This essay is part of a broader series exploring meaning.
Explore related pieces in the meaning knowledge branch: https://gabebautista.com/essays/meaning/