

Week 6: Rethinking Ethics in AI Native Systems
AI is no longer just supporting our systems. It’s starting to run them. From LLMs embedded in customer service workflows to agentic systems automating software deployment, we are delegating not just tasks, but decisions.
Fast ones. Strategic ones. Sometimes irreversible ones.
So here’s the real question technical leaders face: "When your AI-powered system makes a decision you can’t explain, and it backfires, what do you tell your team, your customer, or your regulator?"
There’s No Such Thing as an Ethical AI
AI systems don’t understand fairness or responsibility. They don’t choose; they calculate. What we call “bias” is usually a reflection of patterns in the training data. And in most production systems, the internal logic is too complex to trace.
The ethical risk isn’t in the model itself. It’s in the system that surrounds it: the infrastructure, the oversight, the assumptions, and the lack of safeguards.
And those gaps are growing:
Opaque logic: Many systems today can't explain why a decision was made. Even basic explainability is missing. EU AI Act Risk Categories – European Commission
Unclear ownership: AI-generated code and content may remix licensed or copyrighted material. It’s not always clear who owns what. MIT Technology Review on Copyright and AI
High compute costs: Strubell et al. (2019) estimated that training a single large transformer model with neural architecture search emits roughly 626,000 lbs (~284,000 kg) of CO₂, about the lifetime emissions of five cars. The scale has only grown since. Energy and Policy Considerations for Deep Learning
In most companies, there’s no process or role in place to deal with any of this.
Accountability Can’t Be Automated
Even if an AI writes code, chooses content, or executes a workflow, it’s still the team’s decision to use it. The delegation is human. So is the risk.
If a system makes a critical mistake, it’s not enough to say, “The model did it.”
You still need to know:
- Why the system chose that action.
- Whether it violated policy, law, or expectation.
- And how to fix it or stop it before it happens again.
Without clear architectural support, none of that is possible.
Five Practices to Build Ethical AI Native Systems
Ethical responsibility must be reflected in the system design.
1. Explainability by Default
All major AI decisions should be traceable and interpretable—by design, not as an afterthought. If no one can explain it, no one can control it.
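As a rough illustration, a decision record like the sketch below (the DecisionRecord and approve_refund names are made up for this example) forces every automated decision to carry its inputs, model version, and rationale at the moment it’s made, instead of reconstructing the “why” after something goes wrong.

```python
# Minimal sketch: every automated decision carries its own explanation.
# Names (DecisionRecord, approve_refund) are illustrative, not from any framework.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decision: str                 # what the system decided
    inputs: dict                  # the exact inputs the model saw
    model: str                    # model name and version
    rationale: str                # human-readable justification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def approve_refund(order: dict, model_output: dict) -> DecisionRecord:
    """Wrap a model suggestion so the 'why' is captured at decision time."""
    return DecisionRecord(
        decision="refund_approved" if model_output["score"] > 0.8 else "escalate",
        inputs={"order_id": order["id"], "amount": order["amount"]},
        model="refund-classifier-v3",
        rationale=model_output.get("explanation", "no explanation returned"),
    )

record = approve_refund({"id": "A-1042", "amount": 89.0},
                        {"score": 0.91, "explanation": "matches return policy §4"})
print(record)
```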
2. Audit Trails
Every high-impact decision (e.g. model-generated output, agent behavior) should be logged with inputs, context, and justification. EU AI Act Article 13 – Transparency Requirements
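One lightweight way to get there is structured, append-only logging: every high-impact event becomes one JSON record with its inputs, context, and justification. The sketch below assumes a simple JSONL file and illustrative field names; a real system would write to tamper-evident storage.

```python
# Sketch: append-only, structured audit log for high-impact AI decisions.
# The file path and event fields are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")

def audit(event_type: str, inputs: dict, context: dict, justification: str) -> None:
    """Write one audit record per decision, one JSON object per line."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "inputs": inputs,
        "context": context,
        "justification": justification,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

audit(
    event_type="agent_deploy_action",
    inputs={"service": "checkout", "version": "2.4.1"},
    context={"triggered_by": "agent:release-bot", "policy": "auto-deploy-low-risk"},
    justification="All integration tests passed; change classified as low risk.",
)
```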
3. IP Attribution Tools
Use generation tools that track source and licensing. Don't deploy outputs you can’t verify. MIT Technology Review – AI and Copyright
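In practice this can start as a provenance manifest checked before deployment. The sketch below assumes a made-up manifest format and license allow-list; the point is that unverifiable outputs get blocked, not waved through.

```python
# Sketch: refuse to ship generated artifacts whose provenance can't be verified.
# The manifest format and the allow-list are assumptions for illustration.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause", "internal"}

def verify_provenance(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the artifact may ship."""
    problems = []
    for source in manifest.get("sources", []):
        license_id = source.get("license")
        if license_id is None:
            problems.append(f"{source.get('uri', '<unknown>')}: license unknown")
        elif license_id not in ALLOWED_LICENSES:
            problems.append(f"{source['uri']}: license {license_id} not allowed")
    if not manifest.get("sources"):
        problems.append("no source attribution recorded")
    return problems

manifest = {
    "artifact": "payment_helper.py",
    "sources": [
        {"uri": "github.com/example/lib", "license": "GPL-3.0"},
        {"uri": "internal/snippets/retry.py", "license": "internal"},
    ],
}
issues = verify_provenance(manifest)
print("BLOCKED:" if issues else "OK to deploy:", issues)
```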
4. Ethics Guardians
Assign human reviewers in workflows that affect customers, finances, or operations. Someone must have authority to intervene.
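Architecturally, that means a gate, not a suggestion. The sketch below shows one minimal pattern, with an assumed risk score and reviewer role: low-risk actions run automatically, everything else waits for a named human.

```python
# Sketch of a human-in-the-loop gate: risky actions wait for a named reviewer.
# The threshold, roles, and queue are illustrative assumptions.
REVIEW_THRESHOLD = 0.5   # risk score above which a human must sign off
review_queue: list[dict] = []

def execute_or_escalate(action: dict, risk_score: float, reviewer: str) -> str:
    """Run low-risk actions; park high-risk ones for an accountable human."""
    if risk_score <= REVIEW_THRESHOLD:
        # ... perform the action here ...
        return f"executed automatically: {action['name']}"
    review_queue.append({"action": action, "risk": risk_score, "owner": reviewer})
    return f"escalated to {reviewer}: {action['name']} (risk {risk_score:.2f})"

print(execute_or_escalate({"name": "update FAQ article"}, 0.2, "ops-lead"))
print(execute_or_escalate({"name": "issue large refund"}, 0.8, "finance-guardian"))
print("pending review:", review_queue)
```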
5. Track Compute Waste
Idle GPUs and inefficient inference cycles burn real resources. Add energy budgets to your metrics and shut down what you’re not using. Strubell et al., 2019
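Even a back-of-the-envelope budget helps. The sketch below uses assumed power-draw and budget numbers to flag jobs that blow past an agreed per-job energy allowance.

```python
# Sketch: treat energy as a first-class budget, not an afterthought.
# Power draw, runtimes, and the budget are illustrative assumptions.
def energy_kwh(avg_power_watts: float, hours: float) -> float:
    """Energy consumed = average power draw (kW) x time (h)."""
    return (avg_power_watts / 1000.0) * hours

JOB_BUDGET_KWH = 50.0   # agreed per-job energy budget

jobs = [
    {"name": "nightly-finetune", "avg_power_watts": 300.0, "hours": 6.0, "gpus": 8},
    {"name": "idle-dev-cluster", "avg_power_watts": 80.0, "hours": 72.0, "gpus": 4},
]

for job in jobs:
    used = energy_kwh(job["avg_power_watts"], job["hours"]) * job["gpus"]
    status = "OVER BUDGET - consider shutting down" if used > JOB_BUDGET_KWH else "ok"
    print(f"{job['name']}: {used:.1f} kWh ({status})")
```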
The Hidden Cost of Not Knowing
The real risk isn’t failure; it’s not knowing why it failed.
If you can’t trace system behavior, assign responsibility, or explain outcomes, you’ve already lost control. You can’t debug it. You can’t justify it. And eventually, you won’t be allowed to run it.
This is where AI Native principles must evolve:
- Clear audit logic
- Architectural transparency
- Human-led oversight
That’s what accountability by design means. It’s not just about trust; it’s about keeping control.
About Waves of Innovation
This newsletter is a weekly signal for technical leaders navigating the shift from Cloud Native to AI Native systems. We’re here to explore the real engineering implications of this transition, beyond the hype.
If you care about how systems are built, led, and governed in this next wave, you’re in the right place.
Questions for Reflection
- Can your team explain how a high-risk AI decision was made?
- Who in your org has the authority—and obligation—to intervene?
- How confident are you that your AI system respects IP and regulation?
Key Takeaways
- AI can’t be ethical—but systems built around it can.
- Without explainability and ownership tracking, accountability disappears.
- Ethics must be implemented through architecture, not just principles.