Ethical AI Policy
Future’s Edge ethical AI policy
Version 1.0 — February 2026
Draft. Yet to be ratified by community vote.
Next scheduled review: March 2026
A note before you read this
This document was built from the inside out — starting with our values, listening to our stakeholders, mapping both the harms and the genuine good that AI can bring, and only then writing the rules.
It is written for everyone this policy affects: a young member in Nairobi discovering her strengths, a small business owner in regional Australia trying to do the right thing, a community whose data we hold, a partner organisation trusting us with their most vulnerable beneficiaries.
If you are one of those people and something in here is unclear, that is a failure of this document — not a failure of your understanding. Please tell us. This policy is a living document and your feedback improves it.
Part 1: Who we are and why this policy exists
Future’s Edge is a global, youth-led movement dedicated to empowering people — especially young people, under-served communities, and those in emerging economies — with the skills, tools, and opportunities to shape a fairer world.
We work at the intersection of technology, education, and governance. That means AI is not a peripheral concern for us — it is central to almost everything we do. We use it. We help others use it. We teach people to think critically about it. And increasingly, the communities we serve are affected by it whether they know it or not.
That is why this policy exists.
We believe trust is the single most important factor in AI for good. Not compliance. Not efficiency. Not capability. Trust. And trust is not claimed — it is built through consistent, transparent, human-centred action over time.
This policy is our public commitment to that work.
Part 2: Our core values
These ten values are the foundation on which everything else in this policy is built. They are not aspirational — they are operational. They describe how we behave, how we make decisions, and how we hold ourselves accountable.
1. Trust and transparency — No black boxes, ever. Every decision is open, accountable, and auditable.
2. Innovation with purpose — We build things that solve real problems for real people. Ethics and effectiveness are the same thing, not trade-offs.
3. Strength-based growth — Everyone has unique strengths. Our role is to amplify them, not standardise them away.
4. Ethical incentives and fair recognition — Contributions are recognised and rewarded fairly. Incentives align with community well-being, never manipulation.
5. Global citizenship and collaboration — We are a global-first movement. Knowledge belongs to the commons. Collective intelligence beats individual competition.
6. Lifelong learning and curiosity — Learning never stops. Our members are both students and teachers. Intellectual humility is a strength.
7. Good governance and ethical leadership — Governance is participatory, transparent, and inclusive. Leaders serve the community; they do not dictate it.
8. Diversity, equity, and inclusion — Different voices make better systems. We design for the margins, not the majority. Inclusion is designed in, not added on.
9. Human dignity and psychological safety — Grace is a design principle. We assume good intent. We support struggle and celebrate success.
10. Reversibility and continuous improvement — Good systems get better over time and allow for course correction. This policy is no exception.
Part 3: Our ethical AI principles
Each principle below names the harm it prevents and the benefit it actively pursues. Every principle has a human story behind it — drawn from deep listening to the communities this policy is designed to protect.
Principle 1: Trust is structural, not stated
We build AI systems that are verifiably trustworthy — not systems that claim to be.
We prevent compliance theatre and black-box decision-making. We pursue AI systems that any affected person can interrogate, understand, and challenge.
The test: Can any affected person see how this AI decision was made — and is there a genuine pathway to challenge it?
Principle 2: Human agency is non-negotiable
AI augments human judgment. It never replaces the human voice in decisions that affect people’s lives.
We prevent accountability displacement — the tendency to say “the AI decided” when harm occurs. We pursue amplified human capability, where AI makes people more informed and more confident, and humans always make the final call where it matters.
The test: Is there a named human being accountable for this AI decision — and did a human being make the final call where it mattered?
Principle 3: Inclusion is designed in, not added on
We co-design AI with the communities most affected — especially those historically excluded from the design table.
We prevent bias encoded as objectivity and the cultural erasure of communities whose lives don’t fit the dominant dataset. We pursue AI that genuinely breaks barriers — language, geography, background, and device.
The test: Were the communities most likely to be disadvantaged by this AI system meaningfully involved in designing it — and has it been tested for bias before deployment?
Principle 4: Dignity and grace are design requirements
We design AI systems that treat every person as a human being first — with generosity for failure, support for struggle, and celebration of success.
We prevent punitive, surveilling systems that lock in early failure and erode psychological safety. We pursue systems that assume good intent, offer generous iteration, and frame feedback as education.
The test: If a person is struggling, does this AI system support them — or penalise them?
Principle 5: Economic fairness is explicit and enforceable
Every person who contributes to an AI-enabled system is compensated fairly, transparently, and without exploitation — regardless of where they live.
We prevent economic extraction disguised as opportunity and opaque compensation systems that exploit contributors in lower-income contexts. We pursue transparent, equal-opportunity pay with blockchain-verified contribution recognition that crosses borders.
The test: Would a contributor in Nairobi and a contributor in Sydney doing equivalent work be paid equivalently — and can both see exactly how their pay was calculated?
Principle 6: Privacy is respect
We treat every person’s data with the same care we ask organisations to show their communities — collected minimally, used purposefully, and never weaponised.
We prevent invisible datafication and consent without comprehension. We pursue community-controlled data insights — AI that turns community data into community power.
The test: Does every person whose data this AI uses understand what it is used for — and do they genuinely benefit from that use?
Principle 7: Reversibility and continuous improvement are built in
We build AI systems that get better over time, allow for course correction, and never make harms permanent.
We prevent irreversibility — harms easier to create than to undo, and policies that calcify into irrelevance. We pursue genuine feedback loops, public version histories, and AI tools that learn from every interaction.
The test: If this AI system makes a mistake, can it be corrected — and does that correction make the whole system better?
Principle 8: Open by default
Knowledge, tools, and frameworks that help communities use AI ethically belong to the commons — not behind paywalls.
We prevent the gatekeeping of practical AI knowledge and a two-tier world where only well-resourced organisations can afford to do AI responsibly. We pursue open-source frameworks, accessible pricing, and a KnowledgeBank that any organisation in the world can learn from.
The test: Can an under-resourced community organisation access the same quality of ethical AI guidance as a well-funded corporation?
Principle 9: Future’s Edge is its own proof of concept
We hold ourselves to every standard we ask of others — visibly, verifiably, and without exception.
We prevent the hypocrisy of an organisation that teaches trust while operating without it. We pursue a Future’s Edge whose internal AI practices are as transparent and accountable as those we recommend to everyone else.
The test: Would Future’s Edge be comfortable if every stakeholder could see exactly how it uses AI internally?
Part 4: Member rights
Every person who interacts with Future’s Edge — as a member, a community participant, a partner, or a beneficiary — has the following explicit, enforceable rights in relation to AI:
- The right to know — to be informed when AI is involved in any decision that affects you
- The right to understand — to receive a plain-language explanation of how any AI decision was made
- The right to challenge — to formally contest any AI-generated assessment, score, or decision
- The right to correct — to have errors in your data or AI-generated profile corrected promptly
- The right to opt out — to decline participation in AI systems where a human alternative exists
- The right to improve — to contribute feedback that genuinely feeds into system improvement
- The right to be forgotten — to request removal or anonymisation of your personal data, within the constraints of audit integrity
These rights are not conditional on technical literacy, language, geography, or membership tier. They apply to everyone.
Part 5: Our governance model
The three non-negotiables
Three commitments sit above all others in this policy. They cannot be traded away under any commercial, operational, or political pressure:
- Future’s Edge will never deploy AI that makes a community less heard, less safe, or less in control of their data
- Every AI decision affecting a person’s reputation, opportunity, or standing has a named human accountable for it
- The organisation’s own AI practices are always held to the same standard it asks of others
The AI use case register
Every AI tool and practice used within Future’s Edge is recorded in a public, auditable register — maintained on-chain. Each entry names the tool, what it does in plain language, what data it uses, who is accountable, how a human stays in the loop, when it was last bias-audited, and the outcome of its community impact assessment. The register is updated continuously and published openly.
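To make the register’s expected fields concrete, an entry could be sketched as a simple record. The field names and example values below are illustrative only, not a schema the policy prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    """One entry in the public AI use case register (illustrative fields)."""
    tool_name: str                   # the tool
    purpose: str                     # what it does, in plain language
    data_used: list[str]             # what data it uses
    accountable_owner: str           # who is accountable (a named human)
    human_in_the_loop: str           # how a human stays in the loop
    last_bias_audit: date            # when it was last bias-audited
    impact_assessment_outcome: str   # community impact assessment outcome

# A hypothetical entry, for illustration only
entry = RegisterEntry(
    tool_name="Opportunity matcher",
    purpose="Suggests roles matched to a member's stated strengths",
    data_used=["stated strengths", "availability"],
    accountable_owner="Named tool owner",
    human_in_the_loop="A coordinator reviews every suggestion before it is sent",
    last_bias_audit=date(2026, 1, 15),
    impact_assessment_outcome="Approved",
)
```

Whatever the on-chain representation, each entry carries the same seven facts the register section lists.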
The community impact assessment
Before any new AI tool or practice is adopted, a community impact assessment is completed and published. It asks: who is affected and what do they need to trust? What could go wrong for the most vulnerable group? Is this structurally trustworthy or just compliant? Who is accountable? Are we doing this for them or for us? No AI tool is deployed without a completed, published assessment.
The ethics circle
A standing, community-elected body of five to seven people holds operational accountability for this policy. Its composition requires at least two members from under-served populations or emerging economies, and at least one member under 25. The Ethics Circle reviews and approves all community impact assessments, conducts annual bias audits, investigates member concerns, and publishes an annual public accountability report. It is a community accountability mechanism — not a leadership committee.
The accountability chain
Section titled “The accountability chain”Community — ultimate authority via DAO vote ↓Ethics Circle — policy integrity and oversight ↓Named tool owner — day-to-day accountability ↓AI system — the tool, never the accountable party ↓Affected stakeholder — always has a pathway to challengeBias audit protocol
High-impact tools (reputation, opportunity allocation, governance) are audited every six months. Medium-impact tools (learning, content) annually. Low-impact tools (administrative) every two years. All results are published. Tools that fail an audit are suspended until remediated.
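The audit cadence above can be expressed as a small lookup plus a due-date helper. This is a sketch only: the tier names and the naive month arithmetic are our own assumptions, not part of the protocol.

```python
from datetime import date

# Audit intervals per impact tier, as set out in the bias audit protocol
AUDIT_INTERVAL_MONTHS = {
    "high": 6,     # reputation, opportunity allocation, governance
    "medium": 12,  # learning, content
    "low": 24,     # administrative
}

def next_audit_due(last_audit: date, impact_tier: str) -> date:
    """Return the date the next bias audit falls due for a tool."""
    months_ahead = last_audit.month - 1 + AUDIT_INTERVAL_MONTHS[impact_tier]
    # naive month arithmetic; adequate for first-of-month audit dates
    return last_audit.replace(
        year=last_audit.year + months_ahead // 12,
        month=months_ahead % 12 + 1,
    )
```

For example, a high-impact tool last audited on 1 February 2026 would fall due again on 1 August 2026.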
Part 6: How we behave — the behavioural standards
Section titled “Part 6: How we behave — the behavioural standards”Transparency
- All AI-assisted work is labelled at the point of delivery — always
- AI systems and decisions are explained in plain language any affected person can understand
Human oversight
- Every AI tool has a named human owner before deployment
- Consequential decisions affecting reputation, opportunity, or standing always have a human in the final loop
Inclusion
- No AI tool is used with community members until it has been bias-tested across the diversity of people it will affect
- Communities most likely to be affected are involved in designing or reviewing tools before deployment — not consulted after the fact
Dignity
- AI systems default to assuming good intent — first failures carry no penalty
- Automated language is always supportive, never punitive or cold
Economic fairness
- Every compensation figure generated by an AI system is accompanied by a transparent, plain-language breakdown
- Future’s Edge does not sell member data to third parties — ever
Privacy
- Data collection is justified in the community impact assessment — “we might find it useful” is not sufficient grounds
- Members can view all data Future’s Edge holds about them, at any time
Continuous improvement
- Every AI tool has a genuine feedback mechanism — and that feedback demonstrably influences the tool’s development
- When an AI system causes harm, Future’s Edge acknowledges it publicly, explains what went wrong, and documents what changed
Self-accountability
- Every standard in this policy applies to Future’s Edge internally before it is asked of any client, partner, or member
Part 7: How this policy stays alive
This policy is a living document. It changes as AI changes, as our community grows, and as we learn.
Version control
Every change is published with a plain-language changelog: what changed, why, who proposed it, and the community vote outcome. The full version history is stored on-chain and permanently accessible. Nothing is deleted. Nothing is quietly updated.
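As an illustration, one changelog entry carrying those four fields might look like the following. The keys and values here are hypothetical, and on-chain storage details are out of scope.

```python
# Hypothetical plain-language changelog entry; keys are illustrative only
changelog_entry = {
    "version": "1.1",
    "what_changed": "Clarified the wording of the right to opt out",
    "why": "Members reported the original wording was ambiguous",
    "proposed_by": "A member, via the standard DAO proposal process",
    "vote_outcome": "Passed",
}
```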
Review calendar
- Quarterly — Ethics Circle reviews the concern log and use case register
- Annually — Full community review; open amendment window; DAO vote on proposed changes
- On trigger — Any significant incident, major regulatory change, or community-raised concern initiates immediate review
- Every three years — Full first-principles review, back to empathy mapping with current stakeholder groups
How to propose an amendment
Any member can propose a policy amendment through the standard DAO proposal process. Proposals are published openly, discussed in the community for a minimum of 14 days, and put to a community vote. Minor amendments require Ethics Circle approval. Major amendments require a supermajority community vote.
How to raise a concern
If you believe this policy has been violated — or that an AI practice at Future’s Edge is causing harm — you can raise a concern through the platform at any time, anonymously if you prefer. All concerns are logged on-chain. The named tool owner responds within five business days. The Ethics Circle assesses whether a principle has been violated. No concern is closed without a published resolution.
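The five-business-day response window can be computed mechanically. This sketch counts Monday to Friday only and ignores public holidays, which the policy does not address.

```python
from datetime import date, timedelta

def response_deadline(raised_on: date, business_days: int = 5) -> date:
    """Date by which the named tool owner must respond to a concern."""
    current = raised_on
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday is 0, Friday is 4
            remaining -= 1
    return current
```

A concern raised on Monday 2 February 2026, for instance, is due a response by Monday 9 February 2026.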
Part 8: Our commitment to the commons
Future’s Edge publishes the following as open-source resources, freely available to any organisation:
- This ethical AI policy, in full
- The community impact assessment template
- The bias audit protocol
- The use case register format
- The Ethics Circle charter
- The member rights framework
These resources are available in multiple languages, maintained by the community, and free to adapt with attribution. Our goal is not to be the only organisation doing this well — it is to raise the standard for everyone.
Our three-question test
Before every significant AI decision, Future’s Edge asks three questions — in order:
1. Who is the community here, and what do they need to trust?
2. Is this structurally trustworthy, or just compliant?
3. Are we doing this for them, or for us?
If all three answers are clean, we proceed. If any one is uncomfortable, that is the design problem to solve.
Signatories
This policy is presented for ratification by the Future’s Edge founding community in February 2026. It is owned by the community — not by leadership — and is amended only through the processes described in Part 7.
Version history
| Version | Date | Summary of changes |
|---|---|---|
| 1.0 | February 2026 | Initial draft, pending community ratification |
Future’s Edge Ethical AI Policy v1.0 is published under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to adapt and redistribute it with attribution.