Can We Solve Our Hybrid Alignment Conundrum, From Within?

Everything is connected. AI Alignment starts with human alignment, offline.
As our AI capabilities advance exponentially, a deceptively simple question looms: How can we ensure these powerful technologies remain aligned with human values and ethical principles? This is the crux of the hybrid alignment conundrum – bridging the gap between artificial and human intelligence in service of our highest ideals, both as individuals and as a species.
On the surface, the challenge seems obvious – calibrating AI systems to pursue our intended objectives so that they reflect human ethics and priorities. But this framing overlooks a prerequisite: We must first align ourselves.
Misaligned AI mirrors misaligned humans.
And human misalignment, in turn, fuels misaligned AI.
Before algorithms can mirror our virtues, we must grapple with the disconnect between our aspirations and our actions.
This introspective work is a triple challenge:
1) Clarifying our personal values and aspirations
2) Manifesting those ideals through behavior
3) Harmonizing human intentions with advanced AI capabilities
The quest begins within each of us. Let’s take a step back:
- What are your values?
- Are these values your daily driving motivation? (Values ↔ Aspirations?)
- Are your aspirations and your daily actions in sync (Aspirations ↔ Actions?)
- Where do you feel out of sync?
For individuals, this radical self-inquiry can reveal misalignments to course-correct, across both personal and professional areas. A well-intentioned parent might aspire to be fully present with their children, yet find themselves distracted by their cell phone during quality time. A committed staff member might aspire to honesty, yet find themselves taking home office supplies. Recognizing and closing these small gaps places us on a solid foundation to tackle larger-scale misalignment.
For organizations, a candid review can expose unintended conflicts undermining good governance. A tech company might proudly promote AI ethics principles around privacy and data rights, while simultaneously pursuing growth models that bargain away user data. An AI company may aspire to design inclusive algorithms, yet field an R&D team composed entirely of white men in their twenties. Identifying those fault lines, individually and as a team, is pivotal before any AI rollout.
Once internal offline work is embraced, external online alignment flows more smoothly.
Examples of institutional alignment efforts include the Center for Humane Technology’s approach to designing digital environments that are deliberately ethical, with algorithms that prioritize human wellbeing over endless engagement. Corporations like Salesforce have launched consequential AI programs underpinned by tools like an internal AI ethics calculus to systematically evaluate products against key risks around privacy, bias, and transparency. Municipalities like New York City are establishing public AI governance frameworks, including algorithmic audits of automated decision systems used for housing, benefits, and social services. Increasingly, academic institutions are revamping computer science curricula to incorporate human-centered AI ethics and design principles like Value Sensitive Design from the outset. And governments are proposing sweeping policies like the pioneering AI Bill of Rights from the Biden administration to protect digital civil liberties in the algorithmic age.
GIGO Versus VIVO
Grappling with internal inconsistencies is a lifelong process of growth for individuals and institutions alike. Unfortunately, without that foundational work, any pursuit of “ethical AI” is akin to artful window dressing.
The interplay of human inputs and artificial outputs is illustrated by the well-known catchphrase “Garbage in, garbage out” (GIGO), whose origin story is unclear, with multiple people cited as having coined it. (The idea behind GIGO seems to date to Charles Babbage in the early 19th century, who was asked whether his Difference Engine would produce correct answers if fed wrong figures.) Whoever said it first, and despite vastly improved computing power, the principle underlying GIGO persists – inaccurate inputs lead to inaccurate outputs. Ironically, the dictum proves its own validity: a single erroneous citation posted online spawned a maze of misattributions. One might counter it with its opposite – Values in, values out (VIVO) – as a potential antidote to the doomsday scenario that some foresee as an unavoidable byproduct of our infatuation with AI.
The human condition has always been brilliant and imperfect – conflicted by dueling motivations, contextual pressures, bounded rationality, and unconscious biases. The challenge is that now our inherent flaws and virtues are automated and amplified with each AI deployment.
Do we hard-code society’s racial prejudices into facial recognition systems? Encode hiring discrimination into resume-scanning algorithms? Scale extremist ideologies by inflaming social media information flows? We cannot expect the technology of tomorrow to live up to values that we, the humans of today, are not living up to.
Society is an organically evolving kaleidoscope, and AI is seamlessly merging into this dynamic. Harmonizing that hybrid kaleidoscope starts offline, within each of us, one person at a time.
****
If you are interested in the topic of Agency Amid AI for All (A4), please check my previous articles in this series.
Why AI Inclusion Is A Matter Of Life And Death
How To Use AI As A Creative Sparring Partner
Building Hybrid Resilience In A Tech-Dependent World: Lessons Learned
How Can AI Compensate For Age-Related Cognitive Decline
Harnessing Human Intelligence In An AI-Driven World
If you have comments or ideas please reach out via LinkedIn
Thank you!

