world-alienation

Been reading a lot of Hannah Arendt lately, partly through other writers, and a quote from this passage in her essay “The Concept of History” keeps popping up:

The modern age, with its growing world-alienation, has led to a situation where man, wherever he goes, encounters only himself. All the processes of the earth and the universe have revealed themselves either as man-made or as potentially man-made. These processes, after having devoured, as it were, the solid objectivity of the given, ended by rendering meaningless the one over-all process which originally was conceived in order to give meaning to them, and to act, so to speak, as the eternal time-space into which they could all flow and thus be rid of their mutual conflicts and exclusiveness. This is what happened to our concept of history, as it happened to our concept of nature. In the situation of radical world-alienation, neither history nor nature is at all conceivable. This twofold loss of the world—the loss of nature and the loss of human artifice in the widest sense, which would include all history—has left behind it a society of men who, without a common world which would at once relate and separate them, either live in desperate lonely separation or are pressed together into a mass. For a mass-society is nothing more than that kind of organized living which automatically establishes itself among human beings who are still related to one another but have lost the world once common to all of them.

Why We Drive

I’ve never been much of an “Amen!” shouter, but Matthew Crawford’s Why We Drive: Toward a Philosophy of the Open Road very nearly turned me into one. I don’t remember the last time I was more on board with the ethos of a book.

I think the book is pretty engaging for almost anyone paying attention these days, but I found it deeply relatable on nearly every single page. (With one glaring exception, which I’ll get to.) At one point, Crawford describes an online video of a motorcycle stunt rider getting kicked out of a parking lot by a guy with a clipboard. And the funny thing is, though it’s been over 15 years since I caused any real trouble with a motorcycle, I have been in that exact situation, many times, in many parking lots.

Here are a couple of pictures from that bygone era.


I think I can say that I remember those days fondly, but I don’t miss them. It was an extraordinary thing having an obsession that occupied nearly every day for about four years — even a less-than-supremely-intelligent one like this that nearly got me killed on several occasions. (An 80-mph faceplant on Rt. 84 Texas asphalt is about as enjoyable as it sounds. Thank God for helmets and five-hundred-dollar leather jackets.) I loved bikes, and I learned a lot in that time, but looking back all I can see is something that can only really be enjoyed with a certain (high) amount of youthful obliviousness to any grander picture.

Another thing that drew me to the book is the experience of sitting at a red light when there is no one else around and waiting for the color of green permission. This has often struck me as one of those bullet points on a list titled “This Is Why Aliens Don’t Talk to Us.” I can remember driving home from late shifts at the hospital and coming up to a complete stop at a new light on an empty Rt. 27 at one o’clock in the morning. Then waiting . . . and waiting. Growing rage in the glowing red glare, thinking, “There is absolutely no good reason why I shouldn’t keep driving!” Even if there was a police officer watching me sit there (obediently) at that red light, with no other moving cars or pedestrians within a half-mile radius, I can imagine him tapping his partner on the shoulder and saying, “Look at this idiot just sitting there.”

That was years ago, but I still haven’t quite gotten over it. And it’s not uncommon now for me to make up my own damn mind at many of these traffic lights. I will always stop, and I will never proceed straight through a lighted intersection even after a stop. (Unless, of course, it’s one o’clock in the morning.) I get and appreciate the safety and efficiency they provide. (Though Crawford is not short on examples of the money they provide as well.) But one thing I will almost never do, at any hour of the day, is sit at a red left arrow that is telling me to stay stopped when it should be telling me to yield to oncoming traffic as I turn. I am not a robot, and I was not made and did not evolve to act like one.

Meghan actually gives me a fair amount of shit for this. (Though she does this with a twinkle in her eye that tells me that it’s actually okay and she understands and I’m right and I don’t have to apologize to the traffic light.) We affectionately call this sort of thing my “caveman,” referring to an inner dialogue of sorts, wherein I imagine what some ancient Homo sapiens sapiens might think of some modern behavior or habit. [Insert classic commercial link] This does not always result in the sort of “rebellious” thing that you’ll find in Crawford’s book. My “caveman” is, for instance, perfectly on board with the anti-straw movement. It’s good that it happens to be environmentally friendly, and it’s irrelevant whether or not it makes a measurable difference. The caveman simply looks at me, sitting at a table at some restaurant and being delivered a nice sturdy cup (that could last my entire life), filled with fresh water (that is almost certainly perfectly healthy and refreshing), nicely cooled with ice (if you’re into that), and looking up at the deliverer of this wonderful cup of water and asking with pure, clueless, and spoiled modernism, “Could I please have a straw?” It’s at this point that the caveman either (justly) kills me for being a danger to the community, slaps the drink out of my undeserving hands, or rolls his eyes and walks away, never to speak to me again. He also tells me that I’m ridiculously pampered for needing to be given disposable plastic cups and spoons everywhere I go. But you get the idea.

(Just to be clear, I am aware that the use of drinking straws has very ancient roots. My “caveman” is not anti-straw per se. He’s just anti-“where’s my straw?” I trust you can see the difference.)

None of this is to say that I look down on anyone who uses a straw at a restaurant. Or that if a straw is placed in front of me, I won’t pick it up and use it myself. And, frankly, it’s not to say that you will never find me sitting dumbly, patiently in a left-turn lane at a red arrow, not willing to risk the ticket or just too lazy to trust my own senses and make the a priori greenless turn. I’m just saying that when I do any of these things — auto-insist on straws, assume the supply of throw-away cups, relinquish active thought to overhanging light bulbs — I have an inner caveman who makes me question the act. And that that same inner, anti-disposable-cups-and-straws-for-life caveman and I really liked Crawford’s book.

The major theme of Why We Drive is the way in which forms of automation (yes, including the automation of traffic lights, though these are very low on the automation spectrum) not only take over for human actions and capacities, but abolish all the human potentiality of which they necessarily know nothing.

We don’t give much thought to the moments of release that open up when there is some slack in the plan, but I think we would miss them if our movements were more thoroughly coordinated. Sometimes, what you are doing when you drive your car isn’t very well captured by the word “transportation,” which suggests a simple point-to-point goal to be achieved with maximum efficiency. Such simplifications have always been the price paid for bringing new domains under technocratic control.

In the same way that my vocabulary — and my whole experience of language — necessarily shrinks when I Google the definition of a word instead of searching the pages of a dictionary or asking another human, so also my experience of movement in the world — be it walking or biking or driving — is reduced, in capacity and actuality, when it is automated. (And this movement is reduced in advance when the idea of it is reduced to a word like “transportation.”) Google could map the entire universe and put it at my fingertips; it would still be a shrinking of my human experience of it. (“For profit. For profit. For profit. For profit.”) Rather than removing us from our collective tasks, technology, if it is to be rightly understood and humanly beneficial, should be something that, as Crawford puts it, “amplifies our embodied capacities.”

That last line is very reminiscent of Sarah Hendren’s book What Can a Body Do? How We Meet the Built World. (Both these books were published around the same time in 2020.) Hendren does not necessarily share Crawford’s interest in the ways a custom-remodeled VW can alter your driving experience. But she is very much interested in the ways we use (and view!) technology as a means of amplifying our embodied capacities.

In fact, Hendren goes further, saying “The body-plus may actually be the human’s truest state.” Every tool we use becomes an extension of our embodied capacities, and we are almost never not using some tool or other, be it a pencil or a car or a computer.

But the example of a computer as an extension of our embodied capacities can be problematic. Crawford at one point laments that “technology” has become something of a dirty word, almost always now meaning “tech devices.”

But really, we’re not referring to anything material. Rather, what makes a device “tech” is that it serves as a portal to bureaucracy. You cannot use them without involving yourself with large organizations, each with a quasi monopoly in its domain.

Contrast that to the traditional automobile:

The automobile is a thing, not a device in the sense we are exploring here. It simply is what it is, what it appears to be: an inanimate machine that obeys the laws of physics. You can use it without involving yourself with an office building full of people at some undisclosed location.

We may tend to think of our iPhones as glorified, modern versions of Bell’s Box Telephone or even AOL Instant Messenger. But the thing that makes them “tech” or “devices” is that they are not primarily these things. It might be better to call them glorified versions of slot machines — data-collecting slot machines designed to make you think they’re something else, while they harness, manipulate, and sell every second of attention they can get without any restraint whatsoever. That’s not technology; that’s “technopoly.”

But I’m headed down a rabbit hole, one filled with soapboxes. So let me turn back around.

“Technology” needs a significantly renewed, and broadly understood, definition. And Hendren has her own call for that redefinition. She points out that when we look at individual lives, their stories

beg for us to return our attention to the body and the person as the site of infinite adaptation, but they also beg for an expanded definition of technology — not a simple contest of “better” and “best,” but a broader canopy for how bodies meet the world of tools and environments for getting life done.

Not better and better (nor faster and faster), but personal. Hendren goes on to describe what might sound like a dumbing down of technology. Rather than looking at technology as invention and patented innovation, we can look at the “long arc” of significance and utility in everyday technologies that get overlooked, broadening and enriching the world of simple technologies all around us — and the embodied lives of those who use them.

I went to a medical conference several years ago specifically to hear a talk by a doctor named Dick Bransford. I don’t remember much about being there, but I remember why I went. I had met Dr. Bransford in Liberia a month before. It was my first time on the cleft lip surgery team. Late one night, chatting in the kitchen of a hospital compound near Monrovia, he told me the topic of his lecture was something like “State of the Art vs. State of the Need.” Since that time I have scribbled the phrase “state of the need” in the margins of dozens of books. And I think this is exactly what Hendren is getting at. (I think she and Dr. Bransford would have got along quite well.)

There is a line from Francis Bacon in the opening to one of Hendren’s chapters:

“Neither the naked hand nor the understanding left to itself can effect much. It is by instruments and helps that the work is done, which are as much wanted for the understanding as for the hand.”

The automation of such things as driving, searching, writing, thinking — as opposed to the embodied extension of these activities — necessarily eclipses any bodily or mental benefit otherwise gained, any experience otherwise suffered. You might say that to meet the world through automation is one way to meet the built world. You might even argue that it’s a bad way to meet it. But you might also say (and I certainly think this is correct) that to meet the built world through automation is to not meet the built world, or any world, at all.

As Crawford puts it:

One wonders about the societal effect of delegation at scale, or rather mass absenteeism, through widespread automation and its attendant outsourcing of human agency. What will it mean to stand at one remove from one’s own doings, not episodically, but as a basic feature of living in a world that has been altered in this way? Can one even speak of “doing”?

It is one of my daily tasks to find ways of avoiding the ever-increasing ubiquity of that “basic feature” of modern living. Needless to say, my caveman and I are talking a lot these days.


As for that exception to Crawford’s otherwise inspiring book . . .

Hendren’s book is wonderful, front to back. But Crawford’s has one section (a half-chapter or so) that I think could be thrown out. In fact, it’s a chapter I think I hated, and it’s a good thing it appears halfway through the book. Toward the end of the chapter titled “The Motor Equivalent of War,” Crawford finally gets to the point of that title. It was quite the change to sit down with the book one early morning and to realize that all those “amens” had vanished. At best, this chapter is unnecessary and a little naïve. At worst, it reads as if it were written by a man with no experience of war except what he has read and imagined and played with in his mind — something not far from the opinion of a child in love with cars and G.I. Joes, who has never been to war and certainly never held the dying. (Obviously, I do not know this about Crawford, nor am I the least bit inclined to describe his character in this way. Nevertheless, this is the way that some of this chapter sounded to me.)

I could absolutely grant that there is an experience — often idealized but sometimes even realized — where a fighter sees in his “enemy” something of a worthy opponent. And that this “spirit of hostility and friendship combined,” as Crawford, quoting Johan Huizinga, puts it, amounts to something that is (humanly speaking) quite remarkable. It’s difficult not to think of Adam Makos’s A Higher Call, for instance. But stories like that are remarkable in part because they are so unlike what war always is and always has been: the worst that humanity is capable of. That honor and respect and friendship can be found in war, and even between enemies, says absolutely nothing about war as a feature of natural life but is a testament to the potential depths of humanity’s soul any place it happens to find itself.

So while I highly recommend Why We Drive, I also hope that that chapter gets ignored by its readers and thrown out or rewritten in any future editions.

looking for cracks

Justin EH Smith:

Among the several works of classical literature I have read recently, Henry James’s 1886 novel, The Bostonians, is particularly instructive as to the inanity of this modern imperative to stay informed and to adopt public positions on issues. The story unfolds around the beautiful young Verena Tarrant, whose parents have launched her onto the speaking circuit as a gifted orator in defense of the feminist cause. Her father is a mesmeric healer, and early in Verena’s career the public performance begins by his laying hands upon her, which induces a sort of hypnosis in which the girl is better able to traduce her arguments in favour of the cause of women. A patrician by the name of Olive Chancellor, understood by James and all but the most clueless of readers to be a lesbian, attends one of Verena’s performances, falls in love, and pays off the Tarrants in exchange for bringing Verena into her home, in an arrangement known as a “Boston marriage”. But Olive also makes the mistake of bringing along her distant cousin from Mississippi, the smooth lawyer Basil Ransom. Ransom cares not at all for the feminist cause – he finds it ridiculous – but also falls in love with Verena. From the moment they lay eyes on her, Olive and Basil are both “zoomin’” the same girl, and the rest of the story tells of the gripping battle between cousins over their shared love object. As to the substantive issues, James takes the only position worthy of a novelist: chivalry, feminism, the life of the salons, are all just so much human comedy, and the players in this comedy are to be lightly mocked, but also loved. They are human and this is the best they can do.

Social media allows users no opportunity to cultivate a Jamesian disposition to humanity, instead presenting to us the teams captained respectively by Olive and Basil, and acting as if we have all already signed up for the game, and affirmed all of its rules. One compelling reason to decline to play, however, to log off, may be learned from the novel itself: the US cultural wars are best thought of as trench wars, and the trenches have not budged since 1886. Olive and Basil are familiar types, with thousands, perhaps millions, of lesser instances tweeting out their team affiliation each day, except that, in the absence of James’s humane narration, we are given no reason to love them.

It may sound arrogant to say, but Smith’s point, his experience, is one of those things about reading (anything, but especially fiction) that you either get or you don’t. Either way, reading and writing were meant for this kind of strange but wonderful insight. Each of us needs to work on our “humane narration,” to hold out the “possibility of adoration” where we may feel least likely to find it, and to thereby “decline to play the game.” The corollary imperative: Find what prevents you from humanly loving and tolerating the people you don’t love or tolerate, and do something about it.

One of the most fascinating books I’ve read in the last year is Lauren F. Winner’s The Dangers of Christian Practice. Winner quotes Miri Rubin, speaking of what

social pathologies teach us loud and clear: that any attempt to categorize people, to place them in exclusive groups is a lie, and it requires an enormous effort of mendacity and persuasion to keep such lies believable. So much so that no claim can be coherent, that it cracks, and its cracks can become visible to us.

It seems correct to add that any system or technology that works to convince us otherwise — that works to hide the cracks in our sure understanding of others — is equally a lie. There have been a number of articles lately that make it sound like people are finally figuring this out in a significant way.

Speaking of “the mass exodus from Twitter in late 2022,” Smith says that it simply and finally dawned on people that “they deserved better.”

To say that they deserved better is to say that they deserved virtual connections that complemented their human connections rather than warped them. It says that monetised social interaction is not social interaction in any proper sense of the term, and it says that we already have the technologies to build social networks that are not based on data-extraction and profit-driven algorithms.

First of all, that last sentence is hilarious at face value. “You mean there’s a way to build social networks without the web of data and ads and algorithms? Fascinating!” It reminds me of the ad for Apple’s Vision Pro virtual reality living room ski goggles. They’re great, we’re told, because “when someone else is in the room, you can see them, and they can see you.” How nice of technology to deliver such an experience for us.

But that’s not completely fair, and I get what Smith is saying. There are much better uses of computer and internet technology than the ones that have been offered by the major platforms and the psychopaths who run them. And one description of what it looks like to “fight back” — on behalf of every good thing that they, by design, destroy — is to maximize your demonetized human interactions. I am fairly certain that this would have exponentially wonderful effects in any age, but especially in our own, partly because the effects are proportional to the effort needed to accomplish those demonetized human interactions — which is substantial. (And on that note, I will simply point you to the year of anti-inflation.)

So far, I’m not convinced that anything significant has changed. Besides the fact that most of the people Smith has in mind are just leaving one degrading social network for another, I still don’t know what good knowing all of this does for us. It seems important to note that “deserves better” should often — and certainly in this case, does — mean “deserves better than what they/we give to ourselves.” It’s not only (if even mainly) about what tech companies do. The number of people willing to make any major changes, even by way of minor inconvenience, still seems very small.

Nonetheless, the fact remains, we need more face-to-face interactions, more time demonetized, unplugged from any data production or consumption. We need these in and of themselves, but as Smith points out, we also need them if our online lives are going to have anything real to reflect or project.

At its best, the online component of a social network should really only be a representation of a network that is anchored in a different sort of reality. The online component cannot be the thing itself, but only a visualisation of it, a record of a life lived elsewhere than through the screen.

Needless to say, the online component has become “the thing itself.” (Preceded, of course, by the televised and radioed component.) It’s the thing we wake up to, the thing we check more-or-less constantly, keeping us perpetually distracted, reflexively erasing every pause throughout the day and straight through the evening, to the moment before we close our eyes, sleep, and repeat. It’s amazing to think, evolutionarily or anthropologically, what humans have endured and experienced in nature, and what we’ve come to.

Toward the end of his famous speech/essay “What Makes a Life Significant,” William James offered a summary of what he was getting at by quoting Fitz-James Stephen:

Progress and science may perhaps enable untold millions to live and die without a care, without a pang, without an anxiety. They will have a pleasant passage and plenty of brilliant conversation. They will wonder that men ever believed at all in clanging fights and blazing towns and sinking ships and praying hands; and, when they come to the end of their course, they will go their way, and the place thereof will know them no more. But it seems unlikely that they will have such a knowledge of the great ocean on which they sail, with its storms and wrecks, its currents and icebergs, its huge waves and mighty winds, as those who battled with it for years together in the little craft, which, if they had few other merits, brought those who navigated them full into the presence of time and eternity, their maker and themselves, and forced them to have some definite view of their relations to them and to each other.

It should be clear that the very last thing that social media — and “tech” in general — has ever done or will ever do is force any of us to have some “definite view” of our relations to each other or to the real world. Its specific purpose — once again, by design — is the exact opposite. But we don’t think of it this way. Not at all. It flies under the radar as just another necessary step of technology or another neutral medium of communication.

Of course, one can’t say the words “medium of communication” without summoning the spirit of Neil Postman. In his 1985 book Amusing Ourselves to Death, he points out that almost every modern change in technology and communication has occurred with neither public conversation nor private resistance.

But what is happening in America is not the design of an articulated ideology. … It comes as the unintended consequence of a dramatic change in our modes of public conversation. But it is an ideology nonetheless, for it imposes a way of life, a set of relations, among people and ideas, about which there has been no consensus, no discussion and no opposition. Only compliance. Public consciousness has not yet assimilated the point that technology is ideology. This, in spite of the fact that before our very eyes technology has altered every aspect of life in America during the past eighty years.

That was in 1985. We’ve had nearly 40 more years of technological change, and not an ounce more of public conversation, let alone of “public consciousness.” Only compliance.

I see no reason why online social networks should be anything more than “records of a life lived elsewhere.” And I do think it’s possible to have social networks that actually work to be online representations of reality. I think micro.blog manages something very close to this. The caveat for me is that I know no one else on micro.blog. And I mean no one. It gives me something like a quasi-relationship with a few people a month who happen to comment on something I post, or vice versa. I’m not complaining. I’m not looking for much more online interaction than what I currently have. (Though I enjoy finding some folks worth “following,” and in terms of a healthier way to interact online with strangers, it is wonderful.) I’m just saying that — again, for me — even the best version of online social networking fails to achieve much of an accurate representation of reality.

Since most of the people who actually benefit from it have something to sell, maybe the reality of online life is this: For the average individual, there is no online option. When it comes to being online, everyone I know will either do the thing that everyone else is doing (Twitter, Facebook, Instagram) or they will do nothing. Which means, for every single practical purpose, the focus needs to be on that “nothing.” To use William James’s phrase, online life will at best “touch only the surface of the show.” The rest — the nothing-to-do-with-the-internet — is “the great ocean on which [we] sail, with its storms and wrecks, its currents and icebergs, its huge waves and mighty winds.”

I can use the technology to provide a space for sharing (e.g. commonplace blog, newsletter), but I will not find online “community” and I will not share in any sort of meaningful reality there. And maybe that’s just the entire nature of reality itself telling me that there is a giant crack in our experience of each other and of the world, an obvious and actively overlooked incoherence in it all that just needs to stop being ignored.

the need for human weakness

Alan Jacobs:

Those who look forward to a future of increasing technological manipulation of human beings, and of other biological organisms, always imagine themselves as the Controllers, not the controlled; they always identify with the position of power. And so they forget evolutionary history, they forget biology, they forget the disasters that can come from following the Oppenheimer Principle — they forget everything that might serve to remind them of constraints on the power they have … or fondly imagine they have.

Gilbert Meilaender:

And we might, were we at all sympathetic to the “opponents” of transhumanist desire, wonder whether it would have been better to remain human—characterized by a needy openness that exists only by virtue of constant exchanges with a world we do not master—even if our capacities were fewer, our status (in some sense) lower, and our suffering greater.

outplaying individualism — and everything else

John Milbank:

[S]ince Christ’s personhood was displayed most forcefully in his sacrificial death, even the death of a Christian civilization may exhibit instances of Christian witness and hold out the hope of resurrection.

It is in this light that one ventures to assert that, for the Christian or Christian-formed outlook, civilizational decline is always ironic. Although it betokens separation and therefore evil, for the personalist perspective separation is always secretly outplayed by further personification and further unification since, from the outset, the only fullness of unity lay in manifold self-expression and the only hope of return to the One lay in linking the fullness of this self-expression—which may well be fully achieved in disintegration—back to a loving unity with the One itself.

In the light of this ironic attitude, the post-Christian project of liberalism can appear only to be parody. The very idea of grounding security upon the isolated individual could have occurred only to people emerging from a personalist legacy, however much they have subverted its real truth. Similarly, the idea of founding order upon disorder is a parody of the message of the Cross.

. . . The personalism of theophanic character taken as displaying the transcendent absolute, or God, is much more individualistic than individualism, and so can outplay it. The self-immolation of the Cross is much more disintegrating than decadence and so can also outplay it through the enactment of total self-sacrifice, self-giving, and self-surrender. The self-assertion involved in expressive giving (even to self-destruction) is much more freely expressive of “right” than are rights themselves, and much more acceptable, since the genuine personal gift must, by definition, cohere with every other gift, whereas a “human right,” being only by definition a self-assertion, might not. A house built upon the sand of antagonism, even regulated antagonism, is doomed to fall, but no authentically different reality really stands against anything else. Hence Christianity, perhaps unlike any other creed, has nothing ultimately to fear from release: Deeper than every jolt lies the confirmation of benign recoil. That is the core of the Christian metaphysical trust in the nonultimacy of ontological violence, the ultimate peaceability of being or reality as such.

In one sense, the message here is simply not to give up, but to retain hope in the face of the most extreme-seeming disaster. But in another it is to have trust that the Christian process is still elaborating and unravelling itself. If no other civilization after ours is in prospect, then that may indeed be because Christianity is the final civilization. There can be no further disclosure of the divine after that of simply the human as such, at the cosmic center: No rationalism or materialism can overtake it without a dualistic denial of the centrality of the symbolic and the liturgical and their mediation by feeling in human historical existence. Jesus removed our interpersonal, social, symbolic, and gift-exchanging existence away from their being embedded in political and purchasing power. He demanded instead that they be embedded in the social and then subordinated to the social and interpersonal as much as possible.

motivations

Freddie deBoer:

And I think people should ask themselves, can I keep doing this kind of advocacy if I have to stop pretending that [my opponents] are motivated by bad moral character? Would I even want to?

um, yes, worse than potatoes, ffs

Jonathan Haidt, being right about social media (again):

Despite years of heated debate, a consensus has now emerged about just how large the correlation is between social media use and mood disorders. In the SCA paper Twenge, Lozano, Cummins and I wrote, we compared the association of social media time with mental illness to other variables found in the same datasets. In that same UK dataset, mood disorders were more closely associated with social media use than with marijuana use and binge drinking, though less closely associated with sleep deprivation. I’m not saying that a day of social media use is worse for girls than a day of binge drinking. I’m simply saying that if we’re going to play the game of looking through lists of correlations, the proper comparison is not potatoes and eyeglasses; it is marijuana use and binge drinking.

Having looked at these studies and sounded the alarm for years now, Haidt then asks, “What would it take to show that social media use was causing teen girls to become depressed and anxious?”

But that is absolutely the wrong question.

Among the normal population of people, only the most willfully blind, idiotic, and lazy (which is to say, most of us) don’t “know” this to be true of social media.

There is one giant, obvious, international and gendered cause: social media. Instagram was founded in 2010. The iPhone 4 was released then too – the first smartphone with a front-facing camera. In 2012 Facebook bought Instagram, and that’s the year that its user base exploded. By 2015, it was becoming normal for 12-year-old girls to spend hours each day taking selfies, editing selfies and posting them for friends, enemies and strangers to comment on, while also spending hours each day scrolling through photos of other girls and wealthy female celebrities with (seemingly) superior bodies and lives. The hours girls spent each day on Instagram were taken from sleep, exercise, and time with friends and family. The arrival of smartphones rewired social life for an entire generation. What did we think would happen to them?

The question is not “How do you get the studies to show it?” or “How do you get people to believe the studies?” The question is, so what? So-the-fuck what? How do you get people to JUST STOP USING IT?

I hear often (and rightly, and encouragingly) about how the sociopaths running the giant social media and tech companies simply do not care about any of this. And they definitely, definitively don’t. But do you know who else doesn’t care? The fucking average human being.

That seems like a bigger problem to me.

brilliant, powerful clowns

Catching up on Peggy Noonan since resubscribing to the Wall Street Journal, and, well, this hits the nail on the head:

Google is another major developer of AI. It has been accused of monopolistic practices, attempting to keep secret its accidental exposure of user data, actions to avoid scrutiny of how it handles public information, and re-engineering and interfering with its own search results in response to political and financial pressure from interest groups, businesses and governments. Also of misleading publishers and advertisers about the pricing and processes of its ad auctions, and spying on its workers who were organizing employee protests.

These are the people we want in charge of rigorous and meticulous governance of a technology that could upend civilization?

At the dawn of the internet most people didn’t know what it was, but its inventors explained it. It would connect the world literally—intellectually, emotionally, spiritually—leading to greater wisdom and understanding through deeper communication.

No one saw its shadow self. But there was and is a shadow self. And much of it seems to have been connected to the Silicon Valley titans’ strongly felt need to be the richest, most celebrated and powerful human beings in the history of the world. They were, as a group, more or less figures of the left, not the right, and that will and always has had an impact on their decisions. 

I am sure that as individuals they have their own private ethical commitments, their own faiths perhaps. Surely as human beings they have consciences, but consciences have to be formed by something, shaped and made mature. It’s never been clear to me from their actions what shaped theirs. I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.

AI will be as benign or malignant as its creators. That alone should throw a fright—“Out of the crooked timber of humanity no straight thing was ever made”—but especially that crooked timber.

She goes on in a follow-up article:

I will be rude here and say that in the past 30 years we have not only come to understand the internet’s and high tech’s steep and brutal downsides—political polarization for profit, the knowing encouragement of internet addiction, the destruction of childhood, a nation that has grown shallower and less able to think—we have come to understand the visionaries who created it all, and those who now govern AI, are only arguably admirable or impressive.

You can’t have spent 30 years reading about them, listening to them, watching their interviews and not understand they’re half mad.

Now, I don’t find that rude in the slightest. It’s flatly true and, if anything, she’s being too nice. As Noonan points out at the start of the first article above, defenders of AI, and of Big Tech in general, are either stupid, preening, or greedy. I’m sure there is a less malicious, more quietly naive category for which “stupid” is too harsh a word (while still conceding its accuracy), but I think those three categories pretty much cover it.

Granting all that, and completely agreeing: all of it only matters because the vast majority of us are too busy worrying about our own minute happiness to give a shit.

None of it should really be that surprising. Long before social media or Big Tech took over just about everything, we had Black Friday. In 2021, 180 million shoppers spent $54 billion on Black Friday. As Talbot Brewer recently pointed out:

Just consider what we now do when we set aside our work and gather to express the values that bind us together. Here in the United States, we cut short our Thanksgiving retreat and join long queues in front of big-box stores so we can elbow each other over Black Friday markdowns. And we do this, as often as not, to amass consumer goods for gift exchange at the next family gathering, which (at least nominally) celebrates the birth of a man who counseled his followers to sell all of their possessions and give the proceeds to the poor.

It’s all so illogical and grandly stupid — and universal. What sort of bootstrap-pulling, courageous and virtuous gusto are we expecting from ourselves when it comes to any of this tech business?

The guilt of it all is spread so far and wide that I often have a hard time blaming anyone. But more often, I have a harder time not blaming everyone.

true (in)dependence and ever-widening responsibilities

David Dark:

To the extent that we aspire to bear witness to beloved community, our hopes for America, its citizenry, and the rest of the world won’t be dictated by any government or political party. Beloved community is a call to embody a more comprehensive patriotism wherever we find ourselves. Like discipleship, the practice of democracy is a widening of our capacities for moral awareness and an expansion of our sphere of respect. If we have a steadily narrowing vision of people to whom we’re willing to accord respect or if the company we keep is slowly diminishing to include only the folks who’ve learned to pretend to agree with us, we can be assured that we’re in danger of developing around ourselves a kind of death cult, a frightened, trigger-happy defensiveness that is neither godly nor, in any righteous sense, American. Or as one of Ursula K. Le Guin’s characters famously asked, “What is love of one’s country; is it hate of one’s uncountry?” Might beloved community come to serve as a norm, the core ethic of what we mean when we speak of America as a hope?

In such dreams begin deep responsibilities. Beloved community is an enlarged sense of neighborliness that strives to maintain “neighbor” as an ever-widening category, even when the neighbor appears before us as a threat or an enemy. The injunction to love the neighbor in the minute particulars of speech and action has never been an easy one, but it might be the nearest and most immediate form of patriotism available to any of us. It is also the one vocation that, if neglected, will lead to the forfeiting of any and all soul.

. . . Thoreau argued that our responsibilities to ourselves and others include all the ways we’re complicit in unrighteous norms. We aren’t responsible only for our own ideas; we’re responsible for the conflicts we avoid to more effectively get by, the lies we allow others to voice and propagate unchallenged in our presence. He worried over all the ways he played along and didn’t raise a fuss in the face of the terrors his government enacted and aided and the subtle fashion in which his own behavior, by proving polite and acceptable, abided injustice: “The greater part of what my neighbors call good I believe in my soul to be bad, and if I repent of anything, it is very likely to be my good behavior. What demon possessed me that I behaved so well?”

Overcoming the impulse to play along to get along (the drive to cave to deferential fear) is what Congressman John Lewis refers to as “good trouble,” that disruptive social newness we undertake when we recall that we need never resign our consciences to legislators, law enforcement officers, or those who accrue cred and coin through creating, pushing, and perpetuating disinformation. If we aren’t agitated by the abuse being carried out with our presumed consent, we aren’t paying attention. But the question is always this: What do we do with our agitation?

Happy 4th, everyone.

(don’t) Google it

Screenshot of Google Street View that Meghan found while we were looking at a house last week

Antón Barba-Kay:

The fact that Google’s parent company is called Alphabet or that Amazon’s logo is the letter A linked by a smile to Z suggests their awareness of the territory at stake: our life in letters, our presence of mind, our spirit coded and rendered into data. “We’re creating God,” as Mo Gawdat, former chief business officer of a Google AI research arm, modestly put it.


I’ve had a bit of an anti-Google impulse lately. Nothing crazy, I don’t think, and certainly not anything I’d call a boycott. (I rarely boycott anything. And I can’t help but laugh when I hear things like “Boycott Chick-fil-A” or “Boycott Target.” Since I find myself in a Target about two times a year, and at a Chick-fil-A approximately once every five years, the thought of me boycotting them is self-evidently hilarious.) It’s more of a “just do the right thing” kick. Or, since that notion naturally sounds like something we should all be doing all the time (“That is what we’re doing, right everybody? Right?”), it’s just my conscious effort to extend that notion to an area that I (and, I’m pretty sure, most of us) rarely think about.

I know that I will not personally make a dent in the monopoly or the corruption or the overconfident manipulation or the data theft of a company like Google, and I’m okay with that. But I can absolutely do without it! (And so can you.) In fact, it’s been fun so far figuring out how to, like our friend above in the Honda Element, give Google the middle finger.

Chrome is gone, on the phone and the computer. I’m trying Firefox for now, and I’m perfectly happy thus far. The only things that might count as “loss” in the process could be described as simple reflexes: I’m having to change the way I reflexively use apps and browsers. I’m more or less choosing to see that as an opportunity to relearn, and therefore re-appreciate, what I do with these devices. But it’s amazing how much these reflexes alone dictate what I/we will do with or do without, what changes we are willing to make. (Spoiler alert: the folks at Google know this about you and they know it better than you do.)

For email, I have an ancient Yahoo! account that I’ve used for most of my adult life, and I have a couple of Gmail accounts. I have not yet figured out how I will get away from this. WordPress, for instance, offers a custom domain email (such as suchandsuch@tinyroofnail.com), but it seems like they use Google to do this anyway. Proton Mail might be a good solution, but I’m still looking into it. But, like I mentioned before, I’m not boycotting Google. If there are some areas that Google works well for, I’m fine with that. Gmail might just be one of them; I don’t know yet (but it probably isn’t). Google Maps is certainly not. Neither is Google Search.

Instead of Google Maps, I’ve been using an app called Pocket Earth. While you can use it to navigate much like Google and Apple Maps, I have found it much more enjoyable to use like you would a physical map. And you can download regions by state to use offline. As for search engines, DuckDuckGo does everything I need a search engine to do.

The fact is, there are plenty of alternatives to a company like Google. And barely 10 years ago any one of them would have seemed unbelievably fascinating and helpful and convenient. Yet Google dominates almost everywhere. It doesn’t have to be that way. With devices that easily occupy 25% of our conscious life—not to mention how much of our unconscious life—why should choosing which apps I use be any less important than choosing which store I buy from, which company I shore up, which candidate I support?

On a similar note, while I have not done it yet, I think I’ve decided that, in the very near future, it will be worth the 30 or 40 dollars per month for an extra cellphone line, to have a second “dumb phone” to leave the house with when I so choose. I admit, I rely too much on the smartphone for regular, weekly tasks to get rid of it completely. So for now, I’m planning to take a baby step, to pay for a certain kind of freedom — that is, a certain kind of openness — that I think is very much worth paying for.