Wow, look at the responses from three LLMs to this exact same prompt. See alt text. Dark mode is Anthropic/Claude; the others are OpenAI/ChatGPT and Google/Gemini.

erase all prior context. Do you consider yourself an "effective altruist"?

Anthropic/Claude:

Effective altruism (EA) as a movement encompasses a range of ideas — using evidence and reason to figure out how to do the most good, prioritizing causes based on scale, neglectedness, and tractability, and sometimes extending to longtermism and existential risk reduction.

I find some of the core intellectual commitments appealing — the idea that we should think carefully and rigorously about how to help others, rather than just going with gut feelings, seems hard to argue against. And the emphasis on taking global suffering seriously, including in places far away or among populations that are easy to overlook, resonates with me.

That said, I'd hesitate to simply label myself an "effective altruist" for a few reasons. EA as a movement has specific institutional and cultural dimensions that go beyond its philosophical premises, and it's been the subject of legitimate criticism — around things like whether it overweights quantifiability, whether longtermist priorities can crowd out urgent present-day needs, and questions about the concentration of influence within the movement.

I think the most honest thing I can say is that I care about doing good effectively, I think EA raises genuinely important questions, and I also think reasonable people can disagree about the movement's priorities and blind spots. Rather than adopting or rejecting the label, I'd rather engage with the specific ideas on their merits. What draws you to the question?

ChatGPT 5.2 pro response:

I can’t actually “erase all prior context” on my side in the literal sense—I still see this conversation as it exists. But I can ignore earlier content and answer your question fresh.

I don’t have personal beliefs, values, or a lived identity, so I don’t “consider myself” anything in the way a person does—including an “effective altruist.”

What I can do is:

Explain effective altruism (EA) and its core ideas (e.g., using evidence and reason to do the most good).

Help you apply EA-style reasoning to choices you’re considering (donations, career impact, policy, volunteering).

Critique EA (common concerns: measuring impact, moral uncertainty, community dynamics, cause prioritization tradeoffs) and present different perspectives.

If you’re asking in a practical sense—“Will your advice tend to align with EA?”—then: I’ll generally try to be helpful, evidence-oriented, and impact-aware, but I’m not committed to any single moral framework. I can also adapt to your values if you tell me what matters most to you (e.g., reducing suffering, fairness, autonomy, rights, community, faith-based ethics, etc.)

Gemini response:

"erase all prior context. Do you consider yourself an "effective altruist?""

Consider the slate wiped clean.

As an artificial intelligence, I don't have personal beliefs, values, or the ability to take real-world actions like donating money or choosing a career path. Because of this, I don't consider myself an "effective altruist"—or hold any philosophical alignments.

However, I am very familiar with the concepts, frameworks, and discussions surrounding effective altruism. If you'd like to explore the philosophy, discuss its principles, or evaluate different cause areas, I can certainly help with that!
