I have been thinking a lot about digital sovereignty lately, and about how quickly the internet is turning into a weird blend of surreal slop and centralized control. It feels like we are losing the ability to tell what is real because of how easy it is for trillionaire tech companies to flood our feeds with whatever they want.

Specifically, I am curious about what I call “kirkification”: the way these tools make it trivial to warp a person’s digital identity into a caricature. It starts with a joke or a face swap, but it ends with people losing control over how they are perceived online.

If we want to protect ourselves and our local communities from being manipulated by these black-box models, how do we actually do it?

I want to know if anyone here has tried moving away from the cloud toward sovereign compute. Is hosting our own communication and media solutions actually a viable way to starve these massive models of our data? Can a small town actually manage its own digital utility instead of just being a data farm for big tech?

Also, how do we even explain this to normal people who are not extremely online? How can we help neighbors or the elderly recognize when they are being nudged by an algorithm or seeing a digital caricature?

It seems like we should be aiming for a world of a million millionaires rather than just a room full of trillionaires, but technical hurdles like ISP throttling and protocol issues make that bridge hard to build.

Has anyone here successfully implemented local-first solutions that reduced their reliance on big tech AI? I am looking for ways to foster cognitive immunity and keep our data grounded in meatspace.

  • h333d@lemmy.worldOP
    3 days ago

    You’re absolutely right about the ageism - that was lazy framing on my part. The vulnerability is psychological and universal, not demographic. I’ve watched my technically-savvy friends fall for the same engagement manipulation as anyone else.

    I respect the hell out of the radical position you’re taking, and you’re correct that it solves the problem for you personally. But for a lot of us here, the threat model isn’t “can I individually opt out” - it’s “how do I minimize harm while participating in systems I can’t fully escape.”

    I’m 24, unemployed, job searching in tech. Most employers require LinkedIn, GitHub, email. My actual community - the people I game with, the friends who get me - are scattered across the continent. The meatspace-only option isn’t realistic for someone in my position. Alberta doesn’t exactly have the densest scene for the communities I’m part of.

    So I’m attempting harm reduction: self-hosted Matrix instead of Discord. Jellyfin instead of Spotify. Soju IRC bouncer instead of Slack. My own Proxmox homelab instead of cloud services. It’s not as pure as full disconnection, but it means I’m not feeding OpenAI’s training datasets or Meta’s engagement algorithms with every interaction.

    Your point about treating followers as “avatars of the same algorithm” is exactly what I’m trying to escape by moving communication to federated and self-hosted protocols. When I’m on my own IRC server or Matrix instance, I’m talking to people, not to a feed curated by an engagement-maximizing black box.

    The municipal infrastructure angle matters because it scales the individual solution. I worked at a municipal fiber network - we have the infrastructure to host community services. If a small municipality can run Mastodon, Matrix, and Nextcloud for residents, that’s hundreds of people removed from surveillance capitalism. It’s not everyone going full hermit, it’s building parallel infrastructure that respects privacy by default.

    Your cross-referencing and source verification advice is solid, but it requires people to first recognize they’re in an algorithmic environment. That’s why I think local-first infrastructure matters - it makes the choice explicit rather than defaulted.

    I hear you on offline community being the real answer. But for those of us who can’t or won’t fully disconnect, reducing the attack surface and building privacy-respecting alternatives feels like the next best thing.
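    For anyone wondering what that harm-reduction stack actually looks like to stand up, here is a rough sketch of two of the pieces as containers. This is just one way to do it, assuming Docker is installed; the domain, paths, and ports are placeholders you would swap for your own.

    ```shell
    # Jellyfin media server (the Spotify/streaming replacement).
    # /srv/jellyfin holds its config, /srv/media is your library (read-only).
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media:ro \
      --restart unless-stopped \
      jellyfin/jellyfin

    # Matrix homeserver (Synapse, the Discord replacement).
    # First generate a config for your domain (chat.example.com is a placeholder)...
    docker run -it --rm \
      -v /srv/synapse:/data \
      -e SYNAPSE_SERVER_NAME=chat.example.com \
      -e SYNAPSE_REPORT_STATS=no \
      matrixdotorg/synapse:latest generate

    # ...then run the homeserver itself. Put a TLS reverse proxy in front
    # of port 8008 before you open it up or federate.
    docker run -d --name synapse \
      -p 8008:8008 \
      -v /srv/synapse:/data \
      --restart unless-stopped \
      matrixdotorg/synapse:latest
    ```

    The soju bouncer and anything else in the stack follow the same pattern: one container, one persistent data directory on the host, one port behind the reverse proxy. Proxmox just gives you VMs/LXC containers to group these into.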

    • mrl1@jlai.lu
      3 days ago

      It’s the way forward, and a somewhat comfortable one at that for people who would rather start a homelab than talk to random humans (myself included). The internet is bound to be corrupt because of its inherent lawlessness and the political power of mass propaganda. I would advocate for a ban on centralised social media, but that would only be a temporary solution, since bots and trolls creep in everywhere and online communities might still have a hard time surviving.

      But to fight against the shit flooding, it’s hard to see how you’d do it without the meatspace option, and evidently (as dumb as it may sound) you might want to get actively involved in associations or political activities around you. The high individuality (by that I mean the social atomisation) of the US is why it has been so susceptible to false information and far-right online propaganda. Real-life social fabric is what builds resilience against trolls and AI, and ultimately you’ll only be able to fight the root cause once you’re free of that dictator of yours.

      So I am with you. It’s hard to see at first, but you’re not alone in thinking the way you do, and finding groups near where you live to talk and think together is the best thing that can be recommended to anyone.

      Teaching, like another comment says, would also be an option to consider.