Convex Function

it means at least four different things all at once

Catastrophizing at the Scale of Big Data

[content note: not about distributed computing, actually about the Lovecraftian horrors lurking in every crack and corner of our reality]

I.

A common manifestation of anxiety, depression, and neuroticism in general is catastrophizing, where very bad but relatively low-probability outcomes loom large when simulating the future–think of people with social anxiety who imagine being disliked forever if they say one wrong thing at a social event, or depressed people who know they can never function well enough to do anything interesting with their lives or have actual friends or attachments (happy Valentine’s Day, by the way). These outcomes aren’t impossible; in fact, there are cases where they’re basically right. But the huge majority of people who have these kinds of thoughts aren’t served well by them.

Depressive realism is a thing; it’s complicated and controversial, but it still has some evidence going for it. I think one of the main systematic biases preventing depressive realism from being realistic is low resolution–depressed people catastrophize about how awful their own lives are, or they catastrophize in extremely generic terms about how awful “society” is (usually implicitly, for them). Either way, it’s pretty self-focused, and almost always easy to summarize in a sentence or two without much loss of information. The word “narcissistic”, in the “universal character flaw” sense rather than the “Official Mental Disorder” sense, might be appropriate.

What can we do to fix this?

II.

Many common objects of sacralization are tied to one of the horrors of our world. Sometimes this is foregrounded and universally understood, sometimes the object is held in implicit opposition to one of these horrors, and sometimes it’s very subtle and you have to confirmation bias yourself into seeing it–but much more often than not, it’s there. (Not to say that every slice of horror is well-represented in some common belief system, oh goodness no. We’ll get there.)

Jesus’ self-sacrifice as an instance of and metaphor for the various infinite debts we find ourselves in, Jihad as acceptance of the unconquerable primacy of power, the dread of meaninglessness that Modern Agglomerative Spiritualism builds its aesthetic on. Mu is, if you squint, its own unitary sacred object and a lampshading of the pattern I’m describing here.

Environmentalism, anxiety over social change, and concern about too much/too little government regulation in its many forms are (very limited) acknowledgements of the fragility of the conditions that support our lifestyles. Social justice movements point out that life outcomes are horribly unfair, often as a result of social/identity factors that appear to be more-or-less solvable given specific narrative frameworks (which doesn’t mean they’re actually unsolvable!).

Sacralized sports figures and events might seem to break the pattern, but seen as a desperate replacement for more meaningful lost ritual, maybe not.

III.

The healthcare system doesn’t exist. For the time-constrained, and at the risk of trying to summarize one of Michael Vassar’s codgery rants: the network of institutions to which we assign the intention “make us healthy” is clearly not even attempting to optimize for that.

Robin Hanson writes about this a lot (or, he used to). Expensive licensed doctors do not improve health outcomes over less-expensive clinicians. Cancer screening doesn’t reduce all-cause mortality. Medical spending as it exists now, in general, does not make us healthier. And so on.

IV.

Remember how most published research findings, at least in the less directly mathematical/physical fields, are false? You can be glad that we’ll probably figure out which ones are false eventually, I suppose; or you can be filled with dread, because there’s no indication that the speed at which we figure out what’s true is fast enough to prevent terrible, terrible things from happening, and because so many people will suffer and die needlessly in the meantime.

We have statistical approaches that we know are better than “construct a strawman null hypothesis and demonstrate P(data | strawman-null-hypothesis) is low even though what we actually care about is P(specific-alternative-hypothesis | data)”, but nobody in academia has an incentive to unilaterally start using them.
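To make the gap concrete, here’s a toy worked example (the data, the prior odds, and the specific alternative hypothesis below are all invented for illustration; the point is only that the two quantities are different objects):

```python
# Toy contrast between the NHST quantity and the one we actually care about.
# Hypothetical data: 61 successes in 100 trials of some binary outcome.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, k = 100, 61

# P(data at least this extreme | strawman null: p = 0.5), i.e. the p-value.
# The null is symmetric around 50, so double the upper tail for two sides.
p_value = 2 * sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# P(specific-alternative-hypothesis | data): weigh the null p = 0.5 against
# one concrete alternative p = 0.6, starting from 50/50 prior odds.
likelihood_null = binom_pmf(k, n, 0.5)
likelihood_alt = binom_pmf(k, n, 0.6)
posterior_alt = likelihood_alt / (likelihood_alt + likelihood_null)

print(f"p-value under the strawman null: {p_value:.3f}")       # ~0.035
print(f"posterior for the alternative:   {posterior_alt:.3f}")  # ~0.92
```

The two numbers answer different questions, and only the second depends on which alternative we actually care about and what we believed going in; with a less convenient prior or alternative, a “significant” p-value can happily coexist with a posterior that barely moves.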

V.

One hypothesis for why the suicide rate is so high in the Mountain Time Zone is the lower oxygen availability at high altitude.

Before salt iodization, many people were deficient in iodine (many in the developing world still are). Iodine deficiency can lower IQ by ~10 to 15 points.

There’s probably a link between lead exposure and violent crime. Also a link between low levels of lithium in drinking water and violent crime.

The animals are getting fatter, too.

What would our lives and society be like if we were putting exactly the right amounts of exactly the right things in our bodies? Would we even have social problems anymore? Too bad we’re not going to know what “exactly the right amounts of exactly the right things” looks like for probably thousands of years, at the current rate of medical science.

VI.

There are so many cognitive biases. We have every reason to believe that we haven’t discovered all of them, and falsely discovered some that don’t exist, and misinterpreted some others in counterproductive ways.

Not that just knowing about them is enough, or even necessarily good.

Not that many people care to try to understand them in the first place.

VII.

System 1 is what most of your mind is made of. Urges, aversions, emotions, gut feelings, practiced skills, social interactions. It’s made of lots of disparate parts, a pile of hacks that have or had their use to Azathoth.

System 2 is what we usually think of as “ourselves”: the stream of consciousness, symbolic reasoning. One of its main functions seems to be coming up with narratives for all the stuff that happens in the external world and in System 1.

Imagine some alien device whispering stories into your ear to explain everything you see, with basically no regard for things like “actual causality” or “justifiable confidence” or “your explicit goals”, instead optimizing mostly for maintaining its own skewed and limited conception of your social status. It’s like that, except that alien device is your consciousness! Fun, right?

VIII.

Stockholm syndrome is what we call it when hostages defend, identify with, and have positive feelings for their captors.

Here are some popular quotes about death.

IX.

The anthropic principle is the idea that observers can necessarily only exist in conditions that support their existence. Usually this is brought up in discussions of cosmology or abiogenesis, but the same concept certainly applies to domains like natural selection, epidemiology, memetics, sociology, economics, governance…

In other words, since we’d stop existing and thinking about our continued existence if we stopped existing, our continued existence is absolutely zero evidence for how secure our lives and civilization are. Not just a little evidence–actual zero. We can learn nothing about how good our strategy for “don’t destroy everything” is from observing that our everything is still here.
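If you want that in symbols (a minimal Bayes sketch; the notation is mine): let R be “our survival strategy is dangerously fragile” and A be “we’re alive to observe anything”. Every observation we will ever make is conditioned on A, and given that we’re here asking, A is guaranteed under either hypothesis, so the update is empty:

```latex
% R = "our survival strategy is fragile", A = "we are alive to observe".
% Conditional on being observers at all, P(A|R) = P(A|not-R) = 1, so:
\frac{P(R \mid A)}{P(\neg R \mid A)}
  = \frac{P(A \mid R)}{P(A \mid \neg R)} \cdot \frac{P(R)}{P(\neg R)}
  = 1 \cdot \frac{P(R)}{P(\neg R)}
```

The likelihood ratio is exactly one, so the posterior odds equal the prior odds: zero bits of evidence, just like it says on the tin.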

X.

The Fermi Paradox is the observation that we’re conspicuously alone in the universe. There have been so many billions of years for any alien civilization, anywhere in our entire light cone, to start colonizing other planets, to shoot messages into space like we love to do, to do anything at all that we’d be able to observe. Instead, we see nothing. Maybe they’re all hiding for some very good reason, or maybe intelligent life is unfathomably rare and we’re unfathomably lucky, or maybe something is killing everyone and we’re next. Nobody really knows!

XI.

One hundred trillion people are dying every second. Not literally, of course, but there are possible futures where humans colonize every star in our light cone, and set up new colonies around every one of those stars, and the people in these colonies are happy and glad they exist. The longer we take to get to that point, the more energy those stars lose, and the fewer stars we’ll ever be able to reach, thanks to the accelerating expansion of the universe.

We could have turned that energy into people. There’s a lot of energy out there.

And, of course, if we never make it to that point at all, then none of these people will ever exist.

XII.

Among people who have thoughts more complicated than “Concept X has (negative|positive) valence forever and always in all situations”, and who aren’t already attached to narratives that completely prevent them from understanding in principle, I think the most common obstacle to grokking the unfriendly AI problem is that it suggests a fundamentally unsafe universe.

Technological change can cause some problems, sure–social atomization and inequality and stuff–but surely it could never just destroy literally all the value forever without warning in some unintentional and incomprehensible way that can only be prevented by knowledge that isn’t exactly the same as the minimal knowledge needed to create AI systems in general and that might turn out to be much, much harder to obtain. That would be too unfair.

Remember in (VII) when we established that symbolic thoughts exist solely to understand the world as it is, and that people never unintentionally create comforting or subtly self-serving narratives, particularly in response to perceived threats to their own status or security? Yeah.

XIII.

Epistemic and instrumental rationality are sometimes in tension. Seeing things for what they are is not guaranteed to make you more effective. Unwarranted self-confidence is usually considered instrumentally useful, which is part of why depressive realism often doesn’t lead depressed people to successful lives; likewise, intuitively understanding that we live in a fundamentally unsafe universe will not necessarily help you make the universe any safer.

I didn’t write this to make you sad, so please don’t do that. Send me a message at “james @ thisdomain” if you’d like to talk about this with me.

XIV.

I’m far from the first person to try to perform the Reduce step of the Forbidden MapReduce Job. See:

If you don’t understand yet, maybe one of them will succeed.

unlike us, these adorable kittens are safe

Landfill Mining as a Fully General Rationalization for Not Recycling

I.

I think when most people (including me) throw things away, they implicitly believe the garbage just kind of stops existing–even most people with warm fuzzies/invested identities in environmentalism. They explicitly believe it gets hauled off to a landfill, of course; but there’s nothing to gain from this rising to conscious attention when you actually throw things away (besides guilt, if you’re into that), so the enacted belief appears to be that garbage cans are black holes.

If you have the implicit belief that things stop existing when they’re thrown away, you might be expected to have a related implicit belief that throwing away things you don’t really have to throw away is very bad. How could you just obliterate finite resources? Don’t you care about the children?! And indeed, people who throw away their paper and plastic might be seen as thoughtless. There’s a lot of social and institutional pressure to recycle.

Lessons of my childhood: if you don’t recycle then you’re obese and unattractive.
(These traits are how you identify the Bad People)

So what actually happens? Well, your garbage is hauled off to a landfill, where it sits… and sits… and sits. Until somebody digs it back up because it’s valuable to them.

We could take this used resource and reclaim it now, substituting for extracting more of the raw resource today, or we could put it in the ground until the mine site we’re creating becomes valuable enough for the future to reclaim it, substituting for extracting more of the raw resource in the future. After the cost curves for “mine from ore” and “mine from landfill” intersect and the landfill reclamation happens, about the same amount of the resource will have ultimately been extracted from the Earth either way. So, how do we tell when one is better than the other?
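One way to operationalize “better” is a present-value comparison. Here’s a toy sketch; every dollar figure, the timeline, and the discount rate below are invented, and only the shape of the calculation matters:

```python
# Toy net-present-value comparison for one tonne of some recyclable:
# recycle it today, or landfill it and let the future mine it back out.
# All figures are made up for illustration.

def present_value(cost: float, years_out: float, discount_rate: float) -> float:
    """What a cost paid years_out years from now is worth in today's dollars."""
    return cost / (1 + discount_rate) ** years_out

recycle_today = 120.0    # $/tonne: collect, sort, and reprocess now
landfill_today = 40.0    # $/tonne: tipping fee now
mine_later = 200.0       # $/tonne: excavate and reprocess in the future
years_until_mining = 50  # when the "ore" and "landfill" cost curves cross
discount_rate = 0.03

option_recycle = recycle_today
option_landfill = landfill_today + present_value(
    mine_later, years_until_mining, discount_rate
)

print(f"recycle now:           ${option_recycle:.2f}/tonne")   # $120.00
print(f"landfill + mine later: ${option_landfill:.2f}/tonne")  # ~$85.62
```

With any positive discount rate, costs pushed into the future shrink, which is the boring economic reason “put it in the ground and wait” can win on paper even when the eventual extraction is more expensive in absolute terms.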

I think it’s currently cheaper to recycle aluminum than it is to dispose of it and extract more from the Earth, even without subsidy. (Hidden behind the word “cheaper” is all the land, energy, and human effort that goes into recycling/disposal). I don’t think this is true for most other consumer recyclables yet; for some regulatory environments and for some waste types I bet it is, but I’d naively expect to see more private efforts to get people to recycle if it were always economically efficient. Older landfills had land and water pollution problems, but modern landfills are not particularly environmentally dangerous besides releasing methane (which is often recovered and used), and they take up very little land compared to, say, agriculture; you’d probably preserve much more pristine Earth by buying non-organic vegan food than anything relating to how you deal with your garbage.

So, you heard it here first–throw everything in the trash! The future thanks you for concentrating all these eventually-valuable resources in one place, and your local economist death cult thanks you for taking price signals seriously.

II.

This is not a blog post about why recycling isn’t important. Cute half-informed arguments probably shouldn’t be more convincing than conventional wisdom, so do whatever makes you happy. This is a blog post about purity norms.

There’s this stereotype that people in the Blue Tribe are mostly morally concerned with harm/care and fairness/reciprocity, while people in the Red Tribe care more about loyalty to the ingroup, respect for authority, and sanctity/purity. There was some research published several years ago that apparently confirmed this, and now its status as Science Fact has diffused into not-quite-common-but-not-uncommon knowledge.

Do you think it’s suspicious that some social scientists found that the Blue Tribe is so psychologically different from the Red Tribe? Particularly when most other social science comes to the conclusion that everyone is largely the same–this sentiment is right in the mission statement of many sociology departments. Perhaps this research is more about confirming preexisting beliefs than discovering what reality looks like! [*gasp*] [*shock*] (There’s been some other research finding that this isn’t really right, but of course that’s fighting against a preexisting widely-held belief and an authoritative thing that seems to confirm it, so it hasn’t diffused nearly as well.)

(I’m focusing on the Blue Tribe here because I expect 90% of people who read this will be in Blue Tribe, with the last 10% in Grey Tribe, and if I focused on Red Tribe you’d just say “Yeah, those outgroup members are ignorant and wrong! Yay ingroup!”, which doesn’t teach anyone anything. For the most part, yes, this stuff is basically symmetrical. I do want to make an honest attempt to see Grey Tribe sacrality for what it is too, my own cherished ingroup, but that’ll be its own thing.)

Things like eating organic food and environmentally-motivated acts often (not always!) serve as expressions of purity, in that they’re small sacrifices of ease, pleasure, or money that allow you to self-signal and other-signal as someone who’s concerned about clean, wholesome things that support whatever you think of as the natural order. Trying to make sense of noble, priceless environmentalism with mundane, tasteless money, like I did above, might have pushed a button especially hard; the pure is contaminated by contact with the impure, as a general rule. None of this is wrong, necessarily–it would be way too convenient if you could just reverse all your purity norms and quickly arrive at the “correct” ones–but it’s pretty clear what psychological role they fill for a lot of people; the same one “real” religious practices fill for most others.

They’re practically begging you to notice

This is not intrinsically bad! I have my own pseudo-religion too, of course; I’m sure our minds are shaped this way for good reasons, many of which probably still apply in the modern world, like bonding and solidarity and all that junk. Unlike a lot of people who say things like “[not-technically-a-religion] is a religion”, I’m not suggesting you should stop believing and doing these things, or that you’re necessarily incapable of thinking effectively about anything remotely related, or that you believe and do these things solely to signal affiliation with the ingroup (even if that is a safe yet incomplete explanation for pretty much all human behavior). For the most part, though, just being the sacralizing impulse instead of being reflectively aware of it doesn’t seem like the best strategy for forming accurate beliefs or being effective.

III.

This is not a blog post about purity norms. I like reading about the mechanics of psychology and culture as much as the next nerd, but you can find people trying to see purity norms for what they are in lots of other places. This is a blog post about training yourself to notice motivated reasoning in general.

A few minutes ago, when I suggested that recycling may not be particularly important in some sense, did it make you feel uncomfortable or annoyed? Maybe you started looking for explanations for why it’s wrong, and maybe you quickly both found them and found them convincing. This is what it feels like to perform motivated reasoning–the train of thought at the beginning is hardly perfectly convincing, and could easily crumble under the right data (trains crumble, right?), but I don’t think it’s so obviously wrong that any non-expert could reasonably reject it in a matter of seconds, either.

If you’re interested in at least being aware of when you perform motivated reasoning, and you have some attachment to being environmentally virtuous, then you can use my cute little argument that recycling is unimportant as a training example for what the “defend against people questioning my purity norms at all costs” cognitive algorithm feels like from the inside. Maybe go back to it and focus on that feeling–it has a particular hue that other kinds of discomfort and annoyance don’t. That’s one way among many to help you notice the possibility that you’ve already decided what’s true, that one part of your mind has turned off another part of your mind that a third part of your mind might prefer to leave on, and that whatever you get as output will probably be uncorrelated (not anticorrelated) with reality.

The broader method this is an instance of does actually work for people. It might be easier to first practice on more tangible implementation intentions than [experience a certain feeling] -> [look out for an accompanying mental motion], but designing and carrying out implementation intentions that use thoughts and feelings as their inputs and outputs is something that I (and some other people) think can make you much better at thinking over time, once turned into a habit.

Cognitive Behavioral Prosthetics

I.

What does technology do?

It’s a sort of strange question, I know–maybe almost a type error–but the first thing that comes to mind if you take it at face value can tell you a lot. Whatever your answer looks like, it’s couched in a narrative: you take examples from your own life and what you’ve been taught and the stories of your tribe, and you map the symbol to a small list of concepts and an emotional valence, anchored in this time and place and the rest of how you see the world.

If you were to ask a random person what technology does (and nowadays the word “technology” is usually automatically translated to “computers and electronics”, for better or worse), I imagine the most common answers would look something like:

  • “Calculations” (including e.g. simulations/games and measuring/statistics/inferring things),
  • Organizing and delivering information/communication/media,
  • Creating social spaces and marketplaces,

and… well, that basically covers it, unless they went into more concrete examples or chose something else from the long tail of responses. And, as a corollary, most inventions and systems that see broad adoption have to fit into at least one of these three buckets (see: every software company). Otherwise, you’re fighting to get people to see the world differently while simultaneously trying to persuade them your thing is worth using, and nobody’s got time for that.

Each of these things has been around in its current form for a long time, on high-tech timescales; even that last one has been common for almost two decades now. I’m not saying that’s an exhaustive or even particularly good list of “what technology does”, only that they’re the kinds of things I think most people would answer that question with, maybe even most software engineers. (Engineers who work with particles instead of bits might take a slightly broader view.)

Bret Victor gave this talk, “The Future of Programming”, that conveys a kind-of-related idea, though it focuses specifically on programming. It’s a great talk and I’d recommend it to anyone, programmer or not, but for the time-constrained: when practical digital computers were still very new (the ’60s and early ’70s), nobody actually knew what they were doing. Without preexisting categories and practices as a crutch, people were forced to be creative, to see with fresh eyes–and there was a brief explosion of ambitious-by-modern-standards ideas in how to work with these incredible devices.

But, a culture quickly developed and people learned “what programming looks like”, and now most new things in programming are evolutionary tweaks and combinations of a small list of practices that would’ve been considered unoriginal decades ago–he uses the word “dogma” at least once. There are other factors that go into explaining this story (e.g. coming up with new ideas in a reference class isn’t hard when the reference class contains no known ideas), but I think he has a point.

If you’re willing to believe this path-dependent categorical coagulation happened with “how to make computers do whatever they do”, how likely is it that the same thing didn’t happen with “what computers look like”, or “what computers are for in the first place”, or even “what the results of human design in general can do”?

We’ve done it. We’ve discovered the Final Platonic Forms Of Consumer Technology.
No more changes needed or wise.

II.

I want to offer you another answer to “what technology does”. It’s not at all original, but it does seem to be less common (or less commonly talked about) than I’d like, and I want to flesh it out a bit in a certain direction.

To summarize that second link: technology extends the capabilities of our minds and bodies. This is what every material and informational artifact is ultimately about, even things we might call “art”. The word “prosthetic” is usually defined relative to a standard of normal function, but if you’re willing to temporarily suspend your culturally-situated standards of what normal human function looks like then I think the word captures the idea extremely well.

The stuff in your toolbox, the car or train you take to work, and the roof that’s probably over your head (along with “actual prosthetics” for those missing parts of their original body) are material prosthetics, giving you the choice to be more physically capable and comfortable. Likewise, language and math and computers are examples of mental prosthetics that let you think more clearly, more deeply, and more broadly.

Social prosthetics allow us to better accomplish interpersonal goals (whether shared or not); named examples that are more obviously technologies would be email or your preferred social network, but mores, folkways, interaction rituals, laws, intentional methods that assist in anything broadly defined as “manipulation” (in the denotative sense of “altering mental states” without implied value judgement), and shared symbols and non-universal human culture in general belong in this category. Seeing social technology for what it is when you use it yourself can be difficult, but read an anthropological account of another culture and you’ll notice plenty of examples.

Cognitive behavioral prosthetics, then, are all the different technologies that help you extend or change the mechanics of how you think and act, like the school of therapy with the same name. This cuts across the categories above: for example,

  • an alarm clock or calendar software is a material (or at least externalized) prosthetic that gets you to think or do particular things at the right times, leaving your mind and body free to focus on other things until then;
  • mindfulness practice, or attempts to develop a systematized art of rationality, are mental prosthetics that you can use to try to notice and correct undesirable thoughts and behaviors;
  • social practices can be used to drive behavioral and cognitive changes, for example using the buddy system to develop an exercise habit, or intentionally immersing yourself in a subculture to become more like its members;
  • general-purpose computers themselves and everything that they do really kind of sit at the intersection of all three, and for now are what I’d nominate as the best single example of a “cognitive behavioral prosthetic”.

Searching for the phrase “cognitive prosthetics” turns up mostly 1. sites and papers about assistive devices for the disabled and 2. some articles from transhumanist websites that say something like what I’m saying here, except coated with that delightfully polarizing “Gee Whiz, The Future!” aesthetic. “Cognitive behavioral prosthetics” gives me nothing but two pages with “Viagra” in the preview, which I suppose is an instance of the class but they’re probably not actually about this same topic. “Cognitive behavioral technologies” seems to be the name of a company and product that’s closely related to what I’m talking about, though it’s more of a “prosthetic” in the limited and traditional sense (it’s a therapy aid for depressed people), which seems great but is maybe less ambitious than the name suggests.

I don’t think I can allow myself the satisfaction of coining the term, but I do think it’s an uncommon and fruitful enough way to see things that we’re collectively making a horrible mistake by not intentionally trying to explore the category more.

III.

Systems for externalizing your memory or other cognitive labor, methods for changing habits (branded that way or not), anything that helps you resist maladaptive mental states, tools for improving your ability to notice and be reflexively aware of your mental activity and bodily state, things that help you guide the thoughts and actions of your future selves in particular directions, tools for incentivizing yourself to care more about what you’d prefer to care about, media that expose thoughts previously unthinkable. Whatever can bring clarity, ease, and some greater level of control over ourselves, offload tasks that take precious mental resources to our environment, and support us in being more like we wish we were: these are the kinds of things that can potentially bring a lot of different benefits to a lot of different people (just like, for example, computers in general), and they’re uniquely compounding in a way most other technological change isn’t (like, y’know, computers!).

Cognitive behavioral prosthetics are just a particular kind of technology, in principle capable of growing alongside or (in a dramatically different society) maybe even outpacing the rest of the unstoppable techno-commercialist incentive-thing, letting us gain more of what we want with less of something else, requiring nothing but cultural transmission and a threshold of spare effort and (sometimes) the ability and material wealth to use them. Unlike the “terminal values” they’re used in pursuit of, these technologies exist in an enormous, open-ended design space which we’ve only begun to explore and that’s growing larger all the time.

To be fair, habits of mind and social scripts are as old as humanity; there’s a lot more opportunity to experiment with them than there was 100,000 years ago, and I’m very glad some people take working on those things seriously, but I think the growth of our externalized technology is responsible for the huge majority of the growth in that reachable design space. I also think our collective desire to explore it hasn’t caught up with how much that space has exploded over the past 50 years: there’s the “lifehacking” phenomenon, which in most cases I would describe as “cute” in its ambitions, and a few different products that kind of approach the idea of cognitive behavioral prosthetics but pretty much fail to take it very far.

In recent memory, we have things like Pavlok, which is a glorified shock collar for self-conditioning; the various smartwatches and activity trackers, which don’t do much more than measure your exercise and sleep habits and make a graph of exactly how unhealthy you are over time, for your viewing pleasure; and things like mind mapping software and an endless flood of little “productivity tools”, which have their uses but seem more “gently supportive” than “transformative” in what they can actually accomplish.

(I know, these things require effort and honest engagement from the user, which is often where they fail… the entire point of technology is that it gets us more with less, though. You could tell people that just wearing shoes doesn’t mean walking distant places won’t still take a lot of effort and time, but you could also invent the bicycle.)

Anyway, none of these things are very inspiring in and of themselves. I can’t help but think that, like with programming itself or the design of computing devices, we’ve reached the point where we now “know” what self-improvement technology looks like–it’s silly little wristbands and “________”-tracking software, of course, so if you want to make some material thing or piece of software that people will understand as “makes me better at doing what I want to do in general”, it had better come in the form of a silly little wristband or a “________”-tracking program.

I have some of my own ideas, which I’m very slowly developing in my spare time. I don’t know if trying to create new kinds of broadly-applicable cognitive behavioral prosthetics is the most valuable possible use of your own time–it probably depends on who you are–but if you have any interest in actually helping other people (and yourself) and even a little bit of technical ability and drive to create things… I think not nearly enough people are even really aware that it’s a thing they can try to do, let alone actually trying.

Lifehacking Considered Harmless

[epistemic status: I believe it, but still a bit speculative]

Lifehacking is this thing where people solve small problems in novel ways. (Maybe you’d define it differently, or you have a similar behavioral pattern but don’t use the same word, and that’s okay; I’m referring more to the behavior than the label.) People have always been solving problems, so it might seem strange to have another, relatively new term for the activity. The words small and novel in that definition are doing a lot of work–when we solve problems that aren’t small, it’s usually called something like “innovation” or, you know, “problem-solving”; and if a solution didn’t seem that creative (i.e. it’s similar to something that’s already common knowledge), you’d call it “advice” or a “tip” if somebody reminded you of it, or maybe not label it at all otherwise.

“Lifehack: read books to learn things”. Sounds flippant for some reason

I don’t know if solving small problems in novel ways has actually become so much more common recently that a new word was really needed, but it does seem that way to my young eyes, and I can come up with at least one multi-faceted just-so story to explain why, so here goes. Now that the low-hanging fruit (the easy innovations reachable from where we are now) has been picked:

  1. The mental energy that people have to spend on creation and change is more likely to offer trivial returns, including things so trivial they have little chance of memetic reproduction (let alone becoming named products or practices).

  2. As the marginally-useful, niche-focused creative output that does turn into products and practices proliferates, the dimensionality of the lifestyle optimization problem grows, meaning there are relatively far more opportunities for novel and interesting-sounding hill-climbing that’s difficult to generalize across situations or people. (“17 cool uses for your espresso machine!”)

  3. This lowering of expected returns from “productive” creativity trains people to inhibit any grander and riskier innovative tendencies they might’ve had, at least relative to someone in a similar economic situation before the low-hanging fruit disappeared and this huge variety of trivial stuff took its place, leading to more “consumptive” creativity by implication.

(If you’re questioning the difference between consumptive and productive creativity: I think of it as a spectrum with high variance both within and among creative activities, not a rigid dichotomy. If you’re questioning the idea that people can willfully devote more energy to one type of creativity than the other, or willfully be more or less creative in general, given the huge roles that serendipity and the subconscious play: think operant conditioning and cultural narratives rather than homo economicus.)

Whatever the complete story is (and this narrative isn’t it), the end result is that the outputs of your creative effort are likely to be less impactful, more incremental, less generalizable, and more geared toward purposes that aren’t particularly economic (e.g. fun, signaling) than they would’ve been otherwise. The lifehacking phenomenon as a manifestation of this pattern makes a lot of sense to me, and I doubt that it’s solely because innovation really is harder at the moment.

This is what modern ingenuity looks like

It’s not that lifehacking is universally bad or a waste of time. Small changes to your own habits and environment that result in a little time saved or a small improvement in mood can add up, and keeping an eye out for easy changes that bring outsized benefits is obviously a good idea in any domain. Being mindful of your habits, in particular, is unambiguously important, and play in general is good, whatever form it takes. My fear is that some people overestimate the value of lifehacking compared to alternative activities that scratch a similar itch.

There’s this phenomenon where people who do a good deed are less likely to do another good deed in the near future. In my experience, this short-term emotional satiation effect applies to feelings besides piety; with “lifehacker” as a part of your identity and something you “do” on a regular basis, you’re selling your probably-limited ability to feel satisfied about having done something creative and/or productive to buy trivial, evolutionary, likely transient improvements to your own life. Meanwhile, that strange little side project you could’ve worked on lies unattended, and the world is robbed of a small chance of seeing something truly new and broadly useful.

If you’re inclined to describe an activity as “lifehacking” instead of “research” or “experimenting” or “learning” or “building” or even regular old “hacking”, it’s most likely harmless–nothing less and nothing more.