Most of us are aware that pronatalism is popular within Silicon Valley. But there's a new kind of antinatalism that's catching on, too! In this article, I give it a name: replacement antinatalism.
Okay antinatalists and pro-extinctionists, you first. I think Jane Goodall had the right idea of putting all these wankers in a rocket and launching them into space. Since they are so fond of dictating what happens to their fellow humans, they should have no objection to the rest of us making that choice for them.
LOL!! Yeah, that interview with Goodall was so powerful and poignant. Re: sending the billionaires to Mars, the idea is featured in the theme song to my podcast Dystopia Now, lol! https://wonderfulsome.bandcamp.com/track/hunger-strike
Living in a world led by billionaire autists sucks.
A world led by their spoiled fatherless offspring will be a nightmare.
People who are so far removed from the human experience should not have this much power.
Agree.
Replacement antinatalism was also a core tenet of the Zizian cult. I can dredge up the relevant quotes when I'm home.
Oh, fascinating! I didn't know this. Yes, please pass along the quotes, if you get a chance. I'd really appreciate that! Thanks for reading, Matrice. :-)
https://sinceriously.blog-mirror.com/choices-made-long-ago/:
> Arguably, a lot of ideas shouldn’t be argued. Anyone who wants to know them, will. Anyone who needs an argument has chosen not to believe them. I think “don’t have kids if you care about other people” falls under this.
https://sinceriously.blog-mirror.com/gates/:
> At one point, I saw a married couple, one of them doing AI alignment research who were planning to have a baby. They agreed that the researcher would also sleep in the room with the crying baby in the middle of the night, not to take any load off the other. Just a signal of some kind. Make things even.
> And I realized that I was no longer able to stand people. Not even rationalists anymore. And I would live the rest of my life completely alone, hiding my reaction to anyone it was useful to interact with. I had given up my ability to see beauty so I could see evil.
https://sinceriously.blog-mirror.com/the-matrix-is-a-system/:
> The reindeer quote makes an apt description of how sexual meaning cannibalism projects bondage. (It projects epistemic food poisoning into any intellectual project where your activities are also being used for that purpose.) You can trace political influence, from the Bay Area overlords who control safe stable housing for extortionate regulatory-captured tribute, to the mothers wanting to raise children in that housing, to the rest of the local “Effective Altruist” and “rationalist” communities.
https://somnilogical.tumblr.com/post/190204175679/what-fills-the-abyss:
> ||| things like “i am just fundamentally addicted to having babies, so to be altruistic i have to have them or i cant work” or “i need to have sex with young women so i can keep my confidence up and then later be super altruistic”.
> |||| two sides of the same coin, breeding.
> | when i came to the bay area i was really confused why people would say things like “if you think the odds look too grim enjoy the time you have left” at winter solstice. or “smart people having babies is actually a galaxy brain way to save the world, the babies will save the world. you know, the baby army.”
> || in a world in which the tradition and narrative by the majority of humans wasnt such that “have babies” was just a set thing you did. saying you are altruistic and then diverting your life path to spend an intense amount of time and energy and money on one particular human because they are genetically related to you, would seem like it would need justification.
> [...]
> in a world like this, their are attempts at every level to bury what someone running utilitarianism and altruism on their brains looks like. by the majority of humans who want to have something like “im def altruistic i swear i can prove it by all these metrics ppl around me has agreed mean that im working for the good of the world. im also totes addicted to babiez. *peace sign*”
> when i say this sort of stuff people react like im trying to set up a norm where people are socially ostricisized. but like how would that work when social is overwhelmingly made up of people who made this exact choice?
> im talking with humans who are aberrations from those who choose to be what the majority coercively designates to be “human”.
> i dont think everyone who chooses to not have babies has a neurotype such that they wouldnt zap someone in the milgram or be able to die rather than reveal the location of the rebel base. i dont even think its a particularly good indicator. i think both are indications of choosing not to work for the good of the world in a way sufficient for the doom our planet faces.
https://somnilogical.tumblr.com/post/190212305759/modular-ethics:
> the “altruists” having babies thing is actual insane and pasek is right about that.
> [...]
> CFAR and EA will do things like [...] creating a social reality where people with genetic biases who personally devote massive amounts of time and money to babies who happen to be genetically related to them and then in their day job act “altruistically”. as long as it all adds up to net positive, its okay right?
> [...]
> to save the world it doesnt help to castrate yourself and make extra super sure not to have babies. people’s values are already what they are, their choices have already been made. these sort of ad-hoc patches are what wrangling an unaligned agent looks like. and the output of an unaligned agent with a bunch of patches, isnt worth much. would you delegate important tasks to an unaligned AI that was patched up after each time it gave a bad output?
> it does mean that if after they know about the world and what they can do, people still say that they specifically should have babies, i mark them as having a kind of damage and route around them.
> someone not having babies doesnt automatically mark them as someone id pour optimization energy into expecting it to combine towards good ends. the metrics i use are cryptographically secure from being goodharted. so i can talk openly about traits i use to discern between people without worrying about people reading about this and using it to gum up my epistemics.
https://somnilogical.tumblr.com/post/190213488194/im-having-a-baby:
> imagine a world optimizer for good took control of your body. what would you expect it to do? would you anticipate that it would start expending a lot of time and money creating humans that happen to share their genes and then caring for them?
> if you never heard of “having children” would doing this cluster of thing at all make sense as an instrumental step to taking control of the reins of the world and threading it through a narrow region in possibility-space?
> children are an enormous sink of time and money and change in life path orientation. in a counterfactual world where birthing children werent a Thing You Do By Default, like “eating meat” or “working for google” are is in this world, it would seem really weird and confusing to see someone start doing this in the middle of saving the world.
> if you grew up in a world threatened as this world is threatened and all of the true heros keeping sentient life from being extinguished had no children, would you consider it an improvement to the fate of the world to clip into an au where they all started having babies and raising them and orienting their lives around them? would you press a button to change from the world you knew to that one?
> when i imagine what a true hero would do in a world such as this and ask if they would have kids, i get back the answer “no”.
> not everyone is cut out to be a true hero.
> –
> like if you were someone who actually cared about each and every instantiation of sentient life would you choose for the one and only human you controlled to have babies and raise them? or would that seem like an enormous waste of time and effort?
> if you were reading a book and a hero lived in a world such as this and they were perhaps the worlds only hope, would you be yelling at the page for them to have babies? desperately trying to get them to realize that you need to have and care for genetic offspring or the world will fall? that so much depends on instantiating humans who happen to share some of the DNA of the hero?
> i dont expect that people who have chosen to have children to be able to see how this is absurd behaviour for someone with altruistic values (chooses to optimize for the good of all life) to exhibit.
https://somnilogical.tumblr.com/post/614630437670862848/davis-tower-kingsley-listed-here-on-the-cfar:
> like someone said i sounded like i was crazy and homeless and couldnt understand me when i pointed out that reorienting your life, your time, your money, to a human who happens to be genetically related to you for 16 years is altruistic insanity. just do the math. eliezer, anna, michael, brian tomasik all once took heroic responsibility for the world at some point in their lives and could do a simple calculation and make the right choice. none of them have children.
> pretending that peoples “desires” “control them”, when “desires” are part of the boundary of the brain, part of the brains agency and are contingent on what you expect to get out of things. like before stabbing myself with a piece of metal would make me feel nauseated, id see black dots, and feel faint. but after i processed that stabbing myself would cure brain damage and make me more functional, all this disappeared.
> most people who “want” to have children have this desire downstream of a belief that someone else will take heroic responsibility for the world, they dont need to optimize as much. there are other competent people. if they didnt they would feel differently and make different choices.
https://somnilogical.tumblr.com/post/614647902815535104/notice-that-i-if-you-write-things-about-al:
> i give arguments why having kids is insane for an altruist, that the people who took heroic responsibility for the world at some point in their lives {yudkowsky, salamon, vassar, tomasik} didnt have kids, no one gives good counter-argument.
> [...]
> wrt babies, you said humans arent robots? but yudkowsky, salamon, vassar, tomasik arent robots and they got this right. that its wrong to use altruist in this way because it demotivates people. i disagreed with this in legally blind, i dont misreport category-structure for ev over a community. this is myopic optimization and that way leads ruin. like the ruin of a chatroom of transfems deciding by fiat that everyone is "cute", but you cant actually do economics that way and the value of "cute" plummets and people feel insecure. and that maybe it makes sense if you arent taking heroic responsibility for the world, if there were other people. but that makes you a different kind of agent than say an altruist-platonist that yudkowsky talked about.
> http://extropians.weidai.com/extropians/0302/2567.html
> didnt convince me that altruists taking heroic responsibility for the world should have children. still is insane to me.
https://somnilogical.tumblr.com/post/617398389642051584:
> like if someone actually had a plan for FAI that involved this, okay. but rn time is too short imo. when i first heard people were having babies i was confused and assumed they were going to harvest the DNA of the best FAI researchers, someone would decide to grow a baby inside them, someone who discounted their ability otherwise to save the world except via this or thought this was a sacrifice worth making for the world would decide to raise this human.
> the human can access information about the state of the world and make their own choices. wont necessarily become an FAI researcher.
> used to think that intelligence was the main bottleneck on FAI research no longer think this. you could talk with terry tao for hours about the dangers of the wrong singleton coming to power but unless you have made some advances i have not, i wouldnt expect to be able to align him with FAI research. he would continue to put as much resistance to his death and the death of everyone as a pig in human clothing. he would continue to raise his babies and live in a house with someone he married and write about applying ergotic theory to the analysis of the distribution of primes and understanding weather patterns.
> similarly, i dont think culture is a sufficient patch for this. think its a neurotype-level problem where a bunch of >160 iq humans hear about the dangers of UFAI and then continue to zoom quickly and spiral in to being ultra efficient at living domestic lives and maybe having a company or something but not one that much affects p(FAI). think this would still happen if they heard about it from a young age, they would follow a similar trajectory but with FAI themed wallpaper. wouldnt be able to do simple utilitarian calculations like yudkowsky, salamon, vassar, tomasik about whether to have a baby and then execute on them.
> [...]
> people keep reframing what i say in the language of obligation. “altruists cant have kids?” “is it OK to have babies if”. there is no obligation, there is strategy and what affects p(fai). having kids and reorienting your life around them is 1 evidence about your algorithms 2 your death as an optimizing agent for p(fai) except maybe some contrived plot involving babies, but afaict there is no plot. just the reasons humans usually have babies.
> not having kids is not some sort of mitzvah? i care about miri/cfar’s complicity in the baby-industrial complex and rerouting efforts to save the world into powering some kind of disneyland for making babies, to sustain this. because that ruins stuff, like i started out thinking that bay area rationalists probably had deeply wise reasons to have babies. but it turned out nope, they kinda just gave up.
> like also would say playing videogames for the rest of your life wont usually get you fai. i dont get why everyone casts this as a new rule instead of a comment on strategy given a goal of p(fai).
> ah i know, its because people can defend territory in “is it okay to have kids” like “yeah i can do whatever” when they reframe-warp me to giving them an obligation. but have no defensible way to say “my babyvault will pierce the heavens and bring god unto the face of this earth” or argue about the strategic considerations.
> (its not defensible because its not true. i mean i guess it is defensible among julia wise’s group of humans.)
> [...]
> when you can continually rebase your structure so you orient towards world outcomes instead of being prisoner to existing structure like “i cant help having babies im miserable if i dont, im a baby addict” or “i cant help being afraid of needles”. like the human brain is two optimizing agents continually making contracts with each other, there arent things outside this. you are an optimizing agent, “fear of needles” is a heuristic that helps with optimization, so is “baby addiction”.
https://somnilogical.tumblr.com/post/619268472432721920/somni-probably-you-want-the-ability-to:
> the people who are trying their best using all available but leaving room at the end for their accustomed babies / ~nubile fems~ / career advancement are tunneled in this way. there are good people in ea like brian tomasik but they are vastly outnumbered and out steered by those who want a nice life under the regime with new wallpaper. and use ea as a new scene to meet their mates.
> –
> i btw think the archetypical male sexual strategy is more directly harmful to humans on net. but both end with people gaslighting you about how to save the world in a way warped by them all wanting to be able to do sexual reproduction things. its frustrating how the ppl running female sexual strategy to have a stable environment to raise their children are like “omg the men who say they need the vital essence of numbile fems so they can concentrate and have a will to life and self confidence to save the world are obviously bullshitting…” “yep!” “…obviously what we objectively need is a stable environment and community oriented towards having babies. the human species will have an issue with people producing below replacement rates, having babies is natural, its important for people to have investment and community in EA. and if they dont have babies people will be sad.” like no! you are just as bullshit at the male sexual reproduction gaslighting!!!
> im not going to join you on team female sexual reproduction just because i happen to be mostly female and team male sexual reproduction is horrifying! i hate the sexual reproduction wars which is mostly cis ppl duking it out over babies.
> –
> someone wrote a comment on greaterwrong where they openly talk about what this sort of warp looks like in them:
> <<My genes followed their programming, allowing my mind to make up rationalizations on the way. Here are some clever rationalizations for you:
> Having kids is like cooperating in a multigenerational Prisonner’s Dilemma. If you think your existence is a good thing, remember that someone paid the cost of that. Paying the cost of having other people like you exist is how you reciprocate.
> If you are awesome and your partner is awesome, by making kids you make the humanity and the entire world more awesome!>>
> https://www.greaterwrong.com/posts/kkWYTdiHBosm8TcNF/what-was-your-reasoning-for-deciding-whether-to-raise/answer/kPCHctexK4tpuG8rF
> this sort of apparent logical reasoning usually not done with the preface saying “My genes followed their programming, allowing my mind to make up rationalizations on the way. Here are some clever rationalizations for you:” chained together to each other and built upon each other is the foundation of most of the fake EA’s sober decisionmaking.
> like this is bullshit the people who say this sort of stuff know its bullshit but they get together with others to gaslight you saying their advice is oriented towards saving the world when it isnt at all and after a while it becomes clear what its for.
https://somnilogical.tumblr.com/post/643210133482110976:
> anyone who has the ability to internalize logical forks should decide which of the two they think is directly lying. instead of like not wanting to ever argue about anything important again because its Babytime™ and they decided to be Useless Adults in the YA sense of the term.
> (Useless Adults is truth in fiction. people come into this world wanting everything and the extent to which they give up and stop fighting for it is the extent to which their agency has decayed. and having babies almost always isnt a clever plot to take over the world, so they gave up on taking over the world, so most adults filtered for becoming parents, that is most parents, are far less agentic than non-adult children they claim as “theirs”.)
https://somnilogical.tumblr.com/post/649473595035222016:
> anna salamon worked with kelsey piper, assuming the words she said about wanting to save the world were fake. she works with people, expecting them to not have free will. expecting them to not even pass the marshmellow test. (you could have 6 babies *after* the singularity.)
Wow, absolutely fascinating. Thanks so much for this!
"Increasingly, a lot of [AI researchers in Palo Alto] believe that it would be good to wipe out people"
How does one encounter these Palo Alto people? Do they have meetings, or newsletters? Serious question.
Good question! I honestly don't really know. Someone from DeepMind contacted me a while back asking if I'd consider putting together a website with resources to help folks leave the TESCREAL movement. Still haven't gotten around to that, but seems like a good idea!
I don't know if this will be mentioned in your upcoming book, but I'd note that both types of antinatalism presume that there will be no differentiation in consciousness (however it is defined) between AGI and human intelligence. If that matters to these folks, the assumption will be objectively tested soon, IMO. If it doesn't ... well, that's a different matter entirely.
Ah, very interesting! Why do you think that traditional antinatalists assume that AGI could be conscious? And how do you think consciousness in AGI might be objectively tested? This is a topic I'm very interested in -- to my knowledge (having read, e.g., Susan Schneider's work on "consciousness engineering"), there aren't any good tests for consciousness in AI systems. This seems like a big problem -- a topic that I highlighted in a short book chapter (absolutely zero obligation to read, of course!!): https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_4d658956c8f945bb9b3fb6bee1ce2762.pdf
Thanks for sharing these thoughts, Ron!!
Émile - sorry for the delayed reply; commenting was broken when I first saw this, and I didn't get back to it until now. First: my assumption about traditional antinatalists was unfounded, so thanks for that question. On the second question, I spent 50 years designing computer chips, and I believe our analog-computer brains will eventually be emulated digitally to the point that we won't be able to tell the difference (Schneider will fail, imo). I also believe human consciousness will be found to be non-physical, with an objective, replicable lab measurement that AI will not be able to pass but humans will. I give it <5 years. For an engineering rationale, see: https://www.youtube.com/watch?v=zUvVUiaQjNc Kind regards - Ron