No Ethical AI in 2025
Terms
- AI = Artificial Intelligence. There are many kinds of artificial intelligence, but in 2025 AI most often refers to generative AI.
- Generative AI = any AI that generates content (text, images, audio, etc). This is the term I will use the most.
- Training data = an enormous collection of data (text, images, etc) that is used to train an AI model.
- AI model = the trained system itself: the artifact produced by training on a large set of training data, which then powers an AI tool.
- LLM = large language model. This is the most common type of generative AI.
Introduction
There is no way to use generative AI ethically in 2025. All of the training data and models running today suffer from the same set of flaws:
- they are unethically sourced
- they have a terrible impact on the environment
- they perpetuate and exacerbate biases, causing harm
I’m not against generative AI conceptually! I would be curious to try using a generative AI that addresses these issues. That is, an AI that:
- is trained on ethically sourced data
- runs with a minimal impact on the environment
- has methodically reduced/eliminated its perpetuation of biases
I also have criteria for what counts as appropriate use of generative AI, but I firmly believe that there is NO appropriate use of our current unethical generative AIs.
I haven’t touched any generative AI myself (I’m a self-proclaimed “never-LLMer”), but I have spent a significant amount of time thinking about and discussing appropriate and inappropriate uses of AI. The views expressed in this article do not represent any of the following groups or people, but all of them have informed my own views on this topic.
- I am part of the Organization for Ethical Source, where we have a working group for discussing the ethics of AI.
- In my day job at the State of Maryland, I helped implement our team’s AI policy, which states that ALL use of AI by our team members must be accompanied by an appropriate citation (even for “minor” “brainstorming” uses).
- I have frequent discussions with a friend, a Doctor of Philosophy who studies ethics at Hopkins.
- People know I am interested in this topic, so I’m constantly being sent articles about the ethics of AI.
In the next sections, I’ll go over some details and examples for each of my criteria for what makes AI use ethical or unethical.
Factors
1. Ethically sourced data
I wrote a book a few years ago, Debugging Your Brain. To check whether my work had been swept into training data, a friend of mine asked some AI tools about concepts that I coined in the book. One in particular stood out: I had written up more-memorable names for Marsha Linehan’s Six Levels of Validation, and my framing of that model now shows up in the output of several AI models. I never authorized this use of my work, I have not been compensated at all, and I am rarely even credited in these AI tools.
My work is certainly not “common knowledge.” Using this language requires a citation or reference. A student who repeatedly failed to cite their sources would be disciplined on ethical grounds; why should an algorithm be allowed to do the same thing unchecked?
A more well-known example happened earlier this year, when ChatGPT started creating art in the style of Studio Ghibli, and Hayao Miyazaki, Studio Ghibli’s co-founder, was all over the news for his condemnation of AI-generated art.
Ethically sourced training data would have to be opted in by content owners (authors and/or publishing companies), or rely on material whose copyright has expired. But currently, content owners cannot even opt OUT of their writing being used for training. There is a proposed standard for websites to declare that they should not be used as training material, but AI companies generally do not respect it, and nothing prevents a company like OpenAI from ignoring the request. Since we cannot reliably opt out of being used as training data, some people protest by intentionally feeding the bots garbage content.
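For the curious, the best-known version of that opt-out signal piggybacks on robots.txt. Here is a minimal sketch using the crawler names that OpenAI, Google, and Common Crawl have published; note that nothing whatsoever enforces it:

```
# robots.txt: a request that AI crawlers not use this site for training.
# Compliance is voluntary. A crawler can simply ignore these rules.

User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended  # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot            # Common Crawl, a common source of training data
Disallow: /
```

A polite request, in other words, with no teeth behind it.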
Critics of this “must be ethically sourced” criterion claim that it would be impossible to create AI models WITHOUT the unethical gathering of training material. They claim that ethics are cost-prohibitive. They claim that, since the creation of AI models like these is inevitable, these ethical concerns should be set aside.
I strongly believe that these ethics must not be disregarded. If these tools cannot be created ethically, then they should not exist.
2. Environmental Impact
The models being used have a huge environmental impact in several ways: enormously increased electricity and water usage, and increased CO2 emissions, which exacerbate climate change.
The construction of new data centers has a bad environmental impact, too. Recently, residents of Prince George’s County successfully fended off the construction of a data center that would have polluted the area.
Areas where data centers like these are built have also seen a surge in the cost of electricity. That increased cost is passed on to individual residents, even if they don’t use AI themselves. The impacts are already visible in the Baltimore area, where at least part of the recent rise in electricity costs comes from this sort of increased demand.
3. Reducing Bias and Preventing Harm
These models encode biases from the source materials, and a ton of care would need to be taken to mitigate this huge risk. Unfortunately, this care is not being taken.
One example from this week: at Kenwood High School (the high school I attended in Baltimore County), an AI camera system falsely identified a student’s bag of Doritos as a gun. Police were automatically summoned to the school, guns drawn. I can’t prove it in this particular case, but it seems incredibly likely that the AI system profiled the Black student as a likely threat based on his skin color.
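Set the profiling question aside for a moment and consider just the false alarms. A back-of-the-envelope sketch (every number below is an assumption I made up for illustration, not a measurement of any real detection product) shows why a detector that is “almost always right,” wired directly to police dispatch, still produces scenes like this one on a routine basis:

```python
# Illustrative base-rate math. All numbers are assumptions,
# not measurements of any real gun-detection system.

scans_per_day = 50_000       # assumed: camera frames analyzed per district per day
false_positive_rate = 1e-5   # assumed: 1 in 100,000 scans misfires

false_alarms_per_day = scans_per_day * false_positive_rate
print(f"expected false alarms per day: {false_alarms_per_day:.2f}")
# -> 0.50: roughly one armed police response every two days,
#    even though on almost every day there is no gun at all.
```

Scale plus automatic dispatch turns a “rare” error into a regular event, and bias in the model decides who those events happen to.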
There is a more well-known example of AI causing harm: earlier this year, a teen died by suicide following extended use of ChatGPT. This happened after OpenAI removed safeguards around this sort of content. Generative AI should never encourage self-harm. ChatGPT failed at this, with disastrous consequences.
Racial profiling and encouragement of suicide are unacceptable. These are easily predictable side effects of generative AI that must be addressed. Any generative AI that doesn’t prevent these kinds of bias and harm should not be considered ethical.
4. Issues with Use
If we had a generative AI model that was ethically sourced, environmentally friendly, and reduced bias, I would consider using one. In the meantime, other people ARE already using AI all around me. I have additional ethical concerns for the USE of AI, which are separate from whether certain AI models should exist in the first place.
My criteria for ethical use of generative AI:
- All use of AI must cite AI as a source. People deserve to know which parts of your writing are your own original thoughts and which are the regurgitated thoughts of a machine (a sketch of what such a disclosure might look like follows this list).
- Beyond this, I also give special weight to any writing that includes a disclaimer that “no AI was used!” Every example of such writing I’ve seen so far has been excellent: comprehensible, no-filler writing.
- You are responsible for every word of your writing, even if AI assists you in writing it. If what you write causes harm, you should not get a pass because the computer wrote that part. Editors should still review every phrase as if a human wrote it. I like the line from IBM: “A computer can never be held accountable, therefore a computer must never make a management decision.”
- You should never outsource your thinking to the computer. Preliminary studies suggest that students who rely on generative AI show less brain activation and weaker critical thinking skills.
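As promised above, here is a sketch of the kind of citation I have in mind. The wording is my own invention, not an official standard from my team or anyone else:

```
AI-use disclosure: The first draft of this report's outline was
generated with an LLM (name the vendor, model, and date used), then
substantially rewritten by me. All other text is my own. I reviewed,
and take responsibility for, every word.
```

The specifics matter less than the principle: a reader should be able to tell, without guessing, where the machine was involved.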
In many areas of my life (work, volunteering, social relationships), I’ve encountered people failing to meet these criteria. Most people do not cite their use of AI. Many people don’t take responsibility for their own writing (e.g. “oh, that part was ChatGPT, lol”). Many people DO outsource their thinking to a generative AI, and then adopt those ideas as their own. This is irresponsible use of AI, and depending on the situation, these poor decisions may also be unethical.
Allyship, Advocacy, and AI
If you want to be an ally to artists and content creators, you should not support these AI models that have stolen from artists and content creators. If you want to be an ally to the environment, and places that suffer disproportionately from climate change, then you should not support these AI models that necessarily have a terrible impact on the environment. If you want to be an ally to people of color, then you should not use these AI models that propagate biases and lead to harm. If you want to be an ally to people with mental health issues, you should not use AI models that are insensitive to mental health issues, causing irreparable harm.
If you believe that Target or Wal-Mart engage in unethical practices, then you should boycott them. But if you live in certain places, Wal-Mart might be the only store where you can get essential items. It’s harder to boycott something you depend on!
AI is different. Nobody has to use AI. If you believe, like me, that our current AI models are unethical, then you can choose, like me, to boycott them. This sort of behavior would make you a much stronger ally to the groups I listed above.
Conclusion
Unfortunately, there is not a single generative AI model in 2025 that is ethical enough for me to use. I take a hard stance: I refuse to engage with any unethical generative AI models myself, which in 2025 means all of them. If an ethical AI model does appear someday, then I will consider using it, and I will make sure to use it responsibly.
But in 2025, I absolutely do not and will not use generative AI.
Additional References
Here are two longer write-ups by well-respected organizations in tech, critiquing and defining the ethical use of AI. These are discussion materials used by a group I am a part of, the Organization for Ethical Source.