Convex Function

it means at least four different things all at once

Catastrophizing at the Scale of Big Data

[content note: not about distributed computing, actually about the Lovecraftian horrors lurking in every crack and corner of our reality]

I.

A common manifestation of anxiety, depression, and neuroticism in general is catastrophizing, where very bad but relatively low-probability outcomes are prominent when simulating the future: think of people with social anxiety who imagine being disliked forever if they say one wrong thing at a social event, or depressed people who know they can never function well enough to do anything interesting with their lives or have actual friends or attachments (happy Valentine’s Day, by the way). These outcomes aren’t impossible; in fact, there are cases where they’re basically right. But the huge majority of people who have these kinds of thoughts aren’t served well by them.

Depressive realism is a thing; it’s complicated and controversial, but it still has some evidence going for it. I think one of the main systematic biases preventing depressive realism from being realistic is low resolution: depressed people catastrophize about how awful their own lives are, or they catastrophize in extremely generic terms about how awful “society” is (usually implicitly, for them). Either way, it’s pretty self-focused, and almost always easy to summarize in a sentence or two without much loss of information. The word “narcissistic”, in the “universal character flaw” sense rather than the “Official Mental Disorder” sense, might be appropriate.

What can we do to fix this?

II.

Many common objects of sacralization are tied to one of the horrors of our world. Sometimes this is foregrounded and universally understood, sometimes the object is held in implicit opposition to one of these horrors, and sometimes it’s very subtle and you have to confirmation bias yourself into seeing it–but much more often than not, it’s there. (Not to say that every slice of horror is well-represented in some common belief system, oh goodness no. We’ll get there.)

Jesus’ self-sacrifice as an instance of and metaphor for the various infinite debts we find ourselves in, Jihad as acceptance of the unconquerable primacy of power, the dread of meaninglessness that Modern Agglomerative Spiritualism builds its aesthetic on. Mu is, if you squint, its own unitary sacred object and a lampshading of the pattern I’m describing here.

Environmentalism, anxiety over social change, and concern about too much/too little government regulation in its many forms are (very limited) acknowledgements of the fragility of the conditions that support our lifestyles. Social justice movements are pointing out that life outcomes are horribly unfair, often as a result of social/identity factors that appear to be more-or-less solvable given specific narrative frameworks (which doesn’t mean they actually are!).

Sacralized sports figures and events might seem to break the pattern, but seen as desperate replacements for more meaningful lost ritual, maybe not.

III.

The healthcare system doesn’t exist. For the time-constrained, and at the risk of trying to summarize any of Michael Vassar’s codgery rants, basically: the network of institutions to which we assign the intention “make us healthy” is clearly not even attempting to optimize for that.

Robin Hanson writes about this a lot (or at least he used to). Expensive licensed doctors do not improve health outcomes over cheaper clinicians. Cancer screening doesn’t reduce all-cause mortality. Medical spending as it exists now, in general, does not make us healthier. And so on.

IV.

Remember how most published research findings, at least in the less directly mathematical/physical fields, are false? You can be glad that we probably eventually figure out which ones are false, I suppose; or you can be filled with dread because there’s no indication that the speed at which we figure out what’s true is fast enough to prevent terrible, terrible things from happening, and because so many people will suffer and die needlessly in the meantime.

We have statistical approaches that we know are better than “construct a strawman null hypothesis and demonstrate P(data | strawman-null-hypothesis) is low even though what we actually care about is P(specific-alternative-hypothesis | data)”, but nobody in academia has an incentive to unilaterally start using them.
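
To make the contrast concrete, here’s a minimal sketch in Python; every number is invented purely for illustration, but the shape of the problem is real: a dataset can be “significant” against a strawman null while the specific alternative you actually care about stays improbable.

```python
# A minimal sketch (all numbers invented for illustration) of why a small
# P(data | null) does not imply a large P(alternative | data).

p_data_given_null = 0.04  # the "statistically significant" quantity NHST reports
p_data_given_alt = 0.10   # the specific alternative may not predict the data strongly either
prior_alt = 0.05          # most specific hypotheses are a priori unlikely

# Bayes' theorem over the two hypotheses:
p_data = p_data_given_alt * prior_alt + p_data_given_null * (1 - prior_alt)
posterior_alt = p_data_given_alt * prior_alt / p_data

print(f"P(alternative | data) = {posterior_alt:.3f}")  # ~0.116, nowhere near 0.96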

V.

One hypothesis for why the suicide rate is so high in the Mountain Time Zone is the lower oxygen concentration at high altitude.

Before salt iodization, many people were deficient in iodine (many in the developing world still are). Iodine deficiency can lower IQ by ~10 to 15 points.

There’s probably a link between lead exposure and violent crime. Also, a link between a lack of lithium exposure and violent crime.

The animals are getting fatter, too.

What would our lives and society be like if we were putting exactly the right amounts of exactly the right things in our bodies? Would we even have social problems anymore? Too bad we’re not going to know what “exactly the right amounts of exactly the right things” looks like for probably thousands of years, at the current rate of medical science.

VI.

There are so many cognitive biases. We have every reason to believe that we haven’t discovered all of them, and falsely discovered some that don’t exist, and misinterpreted some others in counterproductive ways.

Not that just knowing about them is enough, or even necessarily good.

Not that many people care to try to understand them in the first place.

VII.

System 1 is what most of your mind is made of. Urges, aversions, emotions, gut feelings, practiced skills, social interactions. It’s made of lots of disparate parts, a pile of hacks that have or had their use to Azathoth.

System 2 is what we usually think of as “ourselves”: the stream of consciousness, symbolic reasoning. One of its main functions seems to be coming up with narratives for all the stuff that happens in the external world and in System 1.

Imagine some alien device whispering stories into your ear to explain everything you see, with basically no regard for things like “actual causality” or “justifiable confidence” or “your explicit goals”, instead optimizing mostly for maintaining its own skewed and limited conception of your social status. It’s like that, except that alien device is your consciousness! Fun, right?

VIII.

Stockholm syndrome is what we call it when hostages defend, identify with, and have positive feelings for their captors.

Here are some popular quotes about death.

IX.

The anthropic principle is the idea that observers necessarily find themselves only in conditions that support their existence. Usually this is brought up in discussions of cosmology or abiogenesis, but the same concept certainly applies to domains like natural selection, epidemiology, memetics, sociology, economics, governance…

In other words, since we’d no longer be around to notice if we had stopped existing, our continued existence is absolutely zero evidence for how secure our lives and civilization are. Not just a little evidence: actual zero. We can learn nothing about how good our strategy for “don’t destroy everything” is from observing our own everything.
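
In Bayesian terms, a quick sketch (my framing, not a formal anthropics result): write S for “our strategy is safe” and O for “we observe that we’re still here”. Because observers only get to make the observation at all conditional on having survived, both likelihoods are 1, so the Bayes factor is 1 and the posterior odds are just the prior odds:

$$
\frac{P(S \mid O)}{P(\neg S \mid O)}
= \frac{P(O \mid S)}{P(O \mid \neg S)} \cdot \frac{P(S)}{P(\neg S)}
= \frac{1}{1} \cdot \frac{P(S)}{P(\neg S)}
$$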

X.

The Fermi Paradox is the observation that we’re conspicuously alone in the universe. There have been so many billions of years for any alien civilization, anywhere in our entire light cone, to start colonizing other planets, shooting messages into space like we love to do, doing anything at all that we’d be able to observe. Instead, we see nothing. Maybe they’re all hiding for some very good reason, or maybe intelligent life is unfathomably rare and we’re unfathomably lucky, or maybe something is killing everyone and we’re next. Nobody really knows!
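
For intuition, here’s a deliberately rough Drake-style estimate in Python. Every parameter is a made-up assumption (that uncertainty is the whole problem), but notice how hard you have to squeeze them to get “nobody, anywhere, ever”:

```python
# A deliberately rough Drake-style estimate. Every parameter below is a
# made-up assumption, yet even pessimistic guesses tend to predict a
# visibly colonized galaxy.

stars_in_galaxy = 2e11            # rough Milky Way star count
frac_with_planets = 0.5           # fraction of stars with planets
frac_habitable = 0.01             # fraction of those with a habitable world
frac_life_arises = 1e-4           # fraction where life actually appears
frac_becomes_spacefaring = 1e-3   # fraction that ever expands outward

expansionist_civs = (stars_in_galaxy * frac_with_planets * frac_habitable
                     * frac_life_arises * frac_becomes_spacefaring)
print(f"expected expansionist civilizations: ~{expansionist_civs:.0f}")  # ~100

# Even at 0.1% of lightspeed, crossing the ~100,000 light-year galaxy takes
# about 10^8 years, a rounding error against the billions available.
crossing_time_years = 100_000 / 0.001
print(f"galaxy crossing time: {crossing_time_years:.0e} years")  # 1e+08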

XI.

One hundred trillion people are dying every second. Not literally, of course, but there are possible futures where humans colonize every star in our light cone, and set up new colonies around every one of those stars, and the people in these colonies are happy and glad they exist. The longer we take to get to that point, the more energy those stars lose, and the fewer stars we’ll ever be able to reach as the accelerating expansion of the universe carries them away.

We could have turned that energy into people. There’s a lot of energy out there.
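
For a sense of where a figure like “a hundred trillion per second” can come from, here’s a back-of-envelope sketch in Python, loosely in the spirit of Bostrom’s “Astronomical Waste”. The star count and per-star population are rough figures of the kind that argument uses; the per-second forfeit fraction is a pure placeholder chosen to show the shape of the estimate, not a measurement.

```python
# Back-of-envelope sketch, loosely in the spirit of Bostrom's "Astronomical
# Waste", of where a "hundred trillion per second" figure can come from.
# Every number here is an illustrative assumption, not a measurement.

stars_in_supercluster = 1e13   # rough star count for the Virgo Supercluster
humans_per_star = 1e10         # assume each star system could support this many

potential_population = stars_in_supercluster * humans_per_star  # ~1e23 people

# Assume delay forfeits some fixed fraction of that potential per second
# (stars radiating energy away, galaxies receding out of reach).
forfeit_fraction_per_second = 1e-9  # purely illustrative placeholder

lives_lost_per_second = potential_population * forfeit_fraction_per_second
print(f"potential lives lost per second: {lives_lost_per_second:.0e}")  # 1e+14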

And, of course, if we never make it to that point at all, then none of these people will ever exist.

XII.

Among people who have thoughts more complicated than “Concept X has (negative|positive) valence forever and always in all situations”, and who aren’t already attached to narratives that completely prevent them from understanding in principle, I think the most common obstacle to grokking the unfriendly AI problem is that it suggests a fundamentally unsafe universe.

Technological change can cause some problems, sure–social atomization and inequality and stuff–but surely it could never just destroy literally all the value forever without warning in some unintentional and incomprehensible way that can only be prevented by knowledge that isn’t exactly the same as the minimal knowledge needed to create AI systems in general and that might turn out to be much, much harder to obtain. That would be too unfair.

Remember in (VII) when we established that symbolic thoughts exist solely to understand the world as it is, and that people never unintentionally create comforting or subtly self-serving narratives, particularly in response to perceived threats to their own status or security? Yeah.

XIII.

Epistemic and instrumental rationality are sometimes in tension. Seeing things for what they are is not guaranteed to make you more effective. Unwarranted self-confidence is usually considered to be instrumentally useful, which is why depressive realism often doesn’t lead depressed people to live successful lives; likewise, intuitively understanding that we live in a fundamentally unsafe universe will not necessarily help you make the universe any safer.

I didn’t write this to make you sad, so please don’t do that. Send me a message at “james @ thisdomain” if you’d like to talk about this with me.

XIV.

I’m far from the first person to try to perform the Reduce step of the Forbidden MapReduce Job. See:

If you don’t understand yet, maybe one of them will succeed.

unlike us, these adorable kittens are safe