This has come up in conversation with multiple people recently, and it has been brought to my attention that our views aren’t commonly shared. I thought I’d take a moment to unpack the reasons we keep our child’s face off social media and minimise how much of his personal data and image is shared digitally until he can consent for himself.
how it started
A long time ago in a galaxy far, far away, a guy wrote a website that allowed people at his college to create profiles and build friend networks. It was a closed system, exclusive to Harvard students, within a social context where the rules were fairly transparent and mutually agreeable. Because it ran on university servers, the costs were minimal, so there was no need for advertisers – and definitely nobody to sell the data to.
how it’s going
Facebook is now all but estranged from those roots. With over 3 billion monthly active users, the rules of the game have long since changed. Data is always either an asset or a liability. In the case of this website, I don’t have any advertising or affiliate marketing or other monetisation, so it’s pure liability: I pay the costs to maintain it (and if it were ever to gain popularity, I would probably have to shut it down because I can’t afford more visitors!). In the case of Facebook, which generated $134bn of revenue in 2023 with a net profit of $39bn, the data is the asset.
That data includes every photo ever uploaded, every comment ever written, every friend request ever sent, and it means that this billion-dollar corporation (still controlled by Zuck himself, who holds a majority of its voting shares) is privy to information about us that even our spouses and children will never know (unless he tells them). In this case, it’s not a metaphor or hyperbole to say that Zuck owns the data, although in some other cases, like Google, the ownership is a bit murkier without that majority control.
For this reason, our child’s face stays off social media and is only minimally shared on Meta products like WhatsApp. Whilst the public exposure is lower with a messaging app like WhatsApp, the same privacy issues apply. It’s not fair for us to risk his future data integrity for the convenience of sending photos of him to friends and family.
what to expect
Even if our child were able to consent, he wouldn’t know what he was consenting to, because none of us can predict how that data will be exploited in the future. Could those of us enthusiastically uploading artfully filtered day-in-the-life shots to Instagram in 2018 ever have dreamed that merely 5 years later those images would be used to train AI models, alongside the mass theft of intellectual property and copyrighted materials?
Companies like Meta, Google, and Amazon are already using the personal data we’ve handed over, stored on their servers, for purposes we couldn’t have imagined. They are feeding our social media biographies and lifetime shopping habits into the AI models they’re currently competing over.
Right now the stakes are so high because this is potentially a winner-takes-all technology. Since nobody knows how the AI boom will play out, there’s a wild scramble to assert dominance in the field, which means that ethics are getting pushed further and further to the fringes. Tech companies are intoxicated by the possibility that one of them might be able to claim a stake so great as to rule them all, and we are faceless, insignificant data points in this massive game of RISK.
For me, this data is my biography. My digital footprints could belong to nobody but me, woven into the internet in a way that only my life has been. In a future where architecture is designed by algorithms and music is made by computers, I want to give my faceless child the faintest hint of a chance at defining his own life story for himself instead of being an extension of mine.