Writing in 1989, moral philosopher Sissela Bok tells us:
Imagine a society, no matter how ideal in other respects, where word and gesture could never be counted upon. Questions asked, answers given, information exchanged—all would be worthless. Were all statements randomly truthful or deceptive, action and choice would be undermined from the outset. There must be a minimal degree of trust in communication for language and action to be more than stabs in the dark. This is why some level of truthfulness has always been seen as essential to human society, no matter how deficient the observance of other moral principles. Even the devils themselves, as Samuel Johnson said, do not lie to one another, since the society of Hell could not subsist without truth any more than others.
When I look at my screen 30 years later, I see this effect -this collapse- all the time. And the only instance I'm comfortable talking about, as a moral agent, is Baby Getting Cheesed.
Last Saturday I had a moment of pause, so I looked at my screen. The screen showed me an adult taking a slice of yellow cheese and tossing it onto a surprised baby's face, where it stuck with a wet plop. The video ended.
I didn’t feel upset or outraged by the act itself. It was, in a way, cute and tugged my dad heart-strings. I remember my son at that age. I wouldn’t have tossed a cheese slice on his face, but I played little games with him, like pretending I’d eat his foot, just so I could get a laugh or smile out of him. The cheese slice on baby face schtick was odd, but it was also endearing in a way.
What bothered me most about the baby getting cheesed was that someone -or perhaps the algorithm itself- had decided to put it on my screen. To get me to consume it. To please me and get me to share it. I realized instantly why the baby getting cheesed had upset me: I was staring a moral hazard in the face.
In general, tossing cheese slices onto the faces of babies is a Bad Thing. It's not something you or I, as moral adults, would encourage. It's not something we'd do in our own homes. It's not something we'd do to our friends' children, our grandchildren, our niece or nephew. It's not something any baby care book would recommend. You'd be hard-pressed to find a parenting or caregiving expert who'd tell you that throwing a cheese slice on a baby's face was a Good Thing To Do. Yet here I was, looking at my screen, reading a piece that voiced great hilarity and mirth at the baby getting cheesed. The video had been viewed 8 million times, and dozens of copycat videos had been made, the writer told me. Most replicas were made by parents. Like me. The whole thing had gone viral in the wilds of the privatized commons.
That horrified me. So I asked myself: why had the baby getting cheesed gone viral?
I'm no behavioral scientist; my credentials in science, law, and sociology are pitiful. What I do know a lot about is how people use technology, and what might motivate the ways they use it. And I know how to use my sense of morality in public and private spaces.
Knowing that most people, in the privacy of their homes or out in public with their child, would elect not to throw cheese on their babies' faces, or to celebrate that others had done so, I realized that the behavior I was seeing on my screen was being induced by something. Encouraged by an unseen hand. By some perverse economic logic at work there, in my screen.
It was being encouraged by the app itself. In my case, that app was Twitter. But it doesn’t really matter. All the apps encourage sharing. They live and die by what we share. And they reward us for sharing. In Twitter’s case, the reward is a value-less form of currency: a like, or a retweet, or maybe a reply. All of these things are bundled up and re-named from what they were (verbs signifying operation-actions on an item of information) into something new: engagement.
Engagement is the coin of the realm of our screens. It’s the engine celebrated by the bit-tycoons and those who write about them for a living. It’s the core economic logic in our screens. To keep us engaged. To further that engagement. To take more of our attention. To ✨razzle dazzle✨ us with pleasing animations and unique experiences.
And also, to get us to do things we wouldn’t normally do.
Notice the deception therein. As people, as normal moral beings in a real physical place, we'd probably not cheese the baby's face, and, more than that, we'd also probably condemn or shun others who did so. We sure as hell would not yield to a corporation asking us to throw cheese on our baby's face, film it, and then put it on screens all over the world.
But in the deceptive hall of mirrors that is social -where sharing is effortless and the twin to the moral hazards it produces- we do exactly that. In the real world, we grab a slice of yellow cheese from the fridge, toss it on the baby's face, then upload the video. For nothing and no reason at all except to accrue a meaningless currency.
To top it all off, the original cheese video -supposedly posted by a brother of the baby- was itself a deception. It had been downloaded and stolen from Facebook. Again: why? To perform. To steal a little authenticity for the purpose of accruing likes.
I think we're in dangerous territory here. My sense is that this un-virtuous cycle could devolve very quickly into chaos. We're seeing more and more bad actors utilize these exploitative software systems to amplify -and indeed induce- bad behavior. The same thing happened with the Momo hoax, which is now no longer a hoax, but a very real self-harm scare frightening parents of third graders at my kid's school. These patterns seem similar to the ones that preceded violence in Myanmar and India. And that's frightening.
Most importantly, we can't depend on any of these apps to regulate or modify the inducement logic behind the behavior their users exhibit. The app makers benefit from inducing certain behaviors in us. We should have learned that lesson as far back as 2016. We should have learned it in 2017 and 2018, especially after violence took people's lives in Myanmar. But app makers have no interest in fixing this, and there's no reason to trust them to fix it, as they've let us down so many times already. We've seen the app makers spread lies and apologize for the consequences, and yet engagement keeps rising. They have no incentive to fix this; in fact, engagement forces the opposite logic on these businesses. Don't fix it. Let it spread. We're making money, so who cares?
Bok, writing with moral clarity and force, warns us again:
A society, then, whose members were unable to distinguish truthful messages from deceptive ones, would collapse. But even before such a general collapse, individual choice and survival would be imperiled. The search for food and shelter could depend on no expectations from others. A warning that a well was poisoned or a plea for help in an accident would come to be ignored unless independent confirmation could be found. All our choices depend on our estimates of what is the case; these estimates must in turn often rely on information from others. Lies distort this information and therefore our situation as we perceive it, as well as our choices. A lie, in Hartmann's words, "injures the deceived person in his life; it leads him astray."