16 Comments
David Nash

This is interesting - "And I like wearing the label and giving talks at EA Globals because I think EA is really great and way more people should get into it."

I also think EA is good and more people should get into it, but have found personally that not calling myself EA (for "small identity" reasons) has been better for helping other people get involved themselves.

They are either more sceptical people who seem to be put off by 'EAs', or they see EA as an all-encompassing identity, which means that if they can't give 10%+, go vegan, pivot their career, and call themselves EA, then they shouldn't get more into it.

Ajeya Cotra

That makes sense! I feel like not putting a label on it is better for persuading people to adopt *particular insights* originating in the EA extended universe (e.g. "GiveWell top charities are highly cost-effective" or "shift from eating chicken to eating beef on the margin because you get more nutrition per unit animal suffering"), but the label is very helpful for getting people to go deep into the *way of thinking* to the point where they're making their own contributions at the frontiers of EA discourse, or building their whole career in an obscure cause that's dominated by EAs.

Manuel del Rio

That was an interesting post! I always enjoy reading quasi-anthropological reflections on EAs and Rats.

While I enjoy exploring ideas seriously (and letting them influence my life), and have great sympathy for many of EA's principles and goals, and even more so for the specific people in the movement I interact with, I could likely never go beyond a very tepid adjacency. Radical empathy completely fails to resonate with me, and when some thread of thinking leads to what I feel are excessively absurd or demanding positions (like Pascalian muggings), I find it very easy to apply some sort of boundedness and just chop it away. But even if the EA path is not for me, I do feel it's likely net-positive for the world, and its externalities mostly just fall upon its practitioners.

Ajeya Cotra

Thanks! I feel like there's always someone more willing to bite bullets than me, which makes me feel more sympathetic to a broad range of attitudes toward bullet-biting.

Manuel del Rio

I am pretty willing to bite *some* bullets (for example, I gladly chew on plenitudinous mathematical platonism), but in metaethics I place an extremely high likelihood on anti-realism, and this trickles down to not being able to really take normative models as binding, except for some lax contractualism.

Ajeya Cotra

Yeah, meta-ethically I've also always been an anti-realist, but dispositionally I've been drawn to more intense and rigid forms of object-level ethics.

David Gretzschel

How do I get you top-right people down to the bottom-right, again? This universalist abstraction porn makes you unable to meaningfully cooperate with real people building real things. Y'all are wasting your talents in so many absurd ways, for fake things and fake reasons.

Look back historically and ask yourself whether the IRA cared more or less about Irish independence than AI Safety cares about human extinction/s-risks/biorisk. For people supposedly concerned with being "effective", why don't you show that you've got teeth?

Ajeya Cotra

> How do I get you top-right people down to the bottom-right, again?

Mostly by writing long, philosophically careful, scrupulously charitable LessWrong posts arguing that we're mistaken.

David Gretzschel

Yeah, theoretically that should work. And I haven't seen my case for it accurately stated in that format. Maybe I'll get around to that later this year. Too busy right now.

Seth K

I honestly fail to see why caring about some potential AI life form, or a future human 1,000 years from now, as much as your own children makes one morally superior.

EA used to be about making the world a better place for all of humanity today, even at the cost of great personal sacrifice. Now it's become so esoteric that it's both meaningless and ineffective. Modern EA demands no sacrifice and produces few results.

Ajeya Cotra

Common misconception imo: longtermism and existential risks were deeply woven into the intellectual scene from the beginning; once you're in the game of trying to figure out where you can have the greatest expected impact, they come up very naturally and insistently.

And of course, EAs do for the most part care about their family (especially their kids if they have them) much more than strangers; they're unusual mainly in being interestingly non-discriminatory between different kinds of strangers, as Ozy puts it here: https://thingofthings.substack.com/p/effective-altruism-moral-circle-expansionism

David Gretzschel

Hmm... this Ozy post tempts me more than ever to finally write those necessary "long, philosophically careful, scrupulously charitable LessWrong posts".

Because this framework is clean and clearly stated enough for me to illustrate that Effective Altruism is a systematically flawed and incoherent system of ethics, and also to point out its redeeming qualities, a task that I've always found overwhelming before. I might actually give it a shot this month!

Suzy

Bit of a tangential question, but do you personally find the simulation hypothesis to be compelling?

Ajeya Cotra

I think I don't find it emotionally compelling, but I do take the logic seriously, yeah.

Anonymous Dude

Your graph is a bit unfair to non-rationalists: you don't even name the bottom-left quadrant, and while I agree that being born caring about the Tegmark IV universe is unlikely, I don't think 'not feasible' starts as far down as you say it does. Plenty of people feel bad for animals and become vegans without doing a lot of reasoning.

Ajeya Cotra

I didn't name other categories of people because I didn't want to speak for others or distract from the core point (locating EAs/rats) by speculating where more popular ideologies may lie.

I do think lots of people can feel bad for animals and become vegan without being full-blown EAs, but it does take some universalizing logic to get there, so it needs to be a bit to the right of the y-axis imo.