You know the kind of day I’m talking about. Big job interview coming up, girlfriend sick, so I spend the night tossing and turning on the couch. Wake up with a neck so stiff I can barely move, only to smash my head on the car door frame because bending over far enough to get in is currently beyond me. Spring allergies? Check. No AC in my old Maverick? Double check. Stuck in crosstown traffic, sweating like I’ve been trapped in a Texas summer my whole life? You know it, guy. By the time I park, I’m half a step away from unraveling.
You know the kind of interview I’m talking about. I want to sound confident without displaying overt desperation, so my answers bounce between watered-down versions of what I’d like to say and whatever I think the two people across from me—the managing editor and her assistant—want to hear. Spoiler: It isn’t going great. I’m too obviously hedging every response, walking the tightrope between “actual hireable professional” and “volunteering to sacrifice myself on the altar of capitalism,” all for the promise of enough money to scrape by at a job that I can finally say uses my English degree.
Then, it happens.
At the end of my desultory interview, the managing editor leans forward and says, “You know, apart from everything else we’ve talked about, I just have to tell you something.”
“Okay,” I reply, trying to sound hopeful.
She smiles—too broadly. “I have to say, you look exactly like the first man I ever fell in love with.”
Her assistant smirks. I blink. What does this even mean? Where is this going? After an awkward pause, I manage another, “O-kay...”
Then she drops it: “My ex-husband.”
My brain short-circuits as she launches into the lengthy backstory. They met in college in Wisconsin, he was a Communist organizer on campus, they fell madly in love, fell less madly out of love, and after divorcing, remained best friends.
“See?” she concludes cheerily, “It’s not a bad thing at all!”
Meanwhile, I’m already chalking up the interview as a lost cause and imagining going home, downing a handful of ibuprofen to soothe my stiff neck and broken brain, and collapsing back on the couch.
It was one of those days.
Two days later, they offered me the job, and the next week I started work.
Go figure, right?
The truth is, I’ll never know why I got that job. Was it something I said during my lackluster interview? Maybe my résumé? Was it the fact that I managed to spell “Cincinnati” correctly on the pre-interview skills test? Or did my accidental resemblance to someone’s ex-husband somehow tip the scales?
The process was a black box—a murky mix of unknown criteria and personal judgments I had no insight into. And honestly, isn’t that how it feels for all of us at job interviews? You show up, do your best, and then the decision gets made by The Great Oz behind a curtain, leaving you to wonder what did and didn’t matter.
Now take that same mystery—the black box of human decision-making—and pair it with AI tools. AI itself isn’t biased; it’s just ones and zeroes. But here’s the kicker: Whoever writes that code, or the institution they work for, brings their own implicit biases to the table, whether they want to admit it or not. These AI tools don’t invent unfairness—they learn from the data we feed them, data riddled with messy, human contradictions.
I’m sure the managing editor who hired me thought her decision was fair, despite dropping that bizarre ex-husband comment. But do any of us fully recognize and account for all the factors—right or wrong, good or bad, sensible or nonsensical—that lead us to make the decisions we do? Probably not. Now imagine an algorithm, trained on flawed human behavior, making those decisions at scale.
That’s where the real trouble begins.
Case in point: In 2014, Amazon decided to streamline its hiring process. Manual résumé screening? Too slow. Human recruiters? Too subjective. The solution? Build an AI tool with an algorithm to slap a one-to-five-star rating on candidates faster than next-day shipping. Sounds great, right? Speed up hiring, weed out bias, and find top talent with ruthless efficiency.
And Amazon wasn’t alone—this was part of a broader tech fantasy that AI could do what humans couldn’t: make big, complicated institutional decisions faster, more fairly, and more consistently. After all, if you could trust AI to identify your perfect toaster at Amazon, why not use it to find your next VP of Sales?
Amazon’s AI hiring experiment quickly turned into a cautionary tale. By 2015, the company discovered that its shiny new algorithm had a glaring flaw: It didn’t like women. The AI was trained on a decade’s worth of résumés, predominantly from male applicants, reflecting the tech industry’s gender imbalance. Consequently, the system penalized résumés that included the word “women’s,” as in “women’s chess club captain,” and downgraded graduates from all-women colleges.
Amazon tried to adjust the algorithm to be neutral to these terms, but the AI continued to find ways to favor male candidates. Realizing the depth of the issue, Amazon scrapped the project, acknowledging that its attempt to eliminate human bias had, ironically, automated it instead.
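If you want to see how that happens mechanically, here’s a back-of-the-envelope sketch in Python. The résumé snippets and outcomes are ones I made up for illustration, and this is nothing like Amazon’s actual system; the point is just that a screening model trained on historical outcomes learns which words co-occurred with past hires, so a word like “women’s” picks up a negative score simply because the history it learned from was already skewed.

```python
import math
from collections import Counter

# Toy "historical" résumés and outcomes (all invented). Past hires skew male,
# so tokens that mostly appear on women's résumés co-occur with rejection.
history = [
    ("software engineer chess club captain", True),
    ("data scientist rugby captain", True),
    ("software engineer debate team", True),
    ("software engineer women's chess club captain", False),
    ("data scientist women's soccer team", False),
    ("software engineer debate club", False),
]

def token_scores(examples, smoothing=1.0):
    """Log-odds of 'hired' for each token -- a crude stand-in for the weights
    a résumé-screening model would learn from this history."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in examples:
        bucket = hired if was_hired else rejected
        for tok in set(text.split()):
            bucket[tok] += 1
    vocab = set(hired) | set(rejected)
    return {
        tok: math.log((hired[tok] + smoothing) / (rejected[tok] + smoothing))
        for tok in vocab
    }

scores = token_scores(history)
for tok in ("chess", "captain", "women's"):
    print(f"{tok!r}: {scores[tok]:+.2f}")
# 'chess': +0.00, 'captain': +0.41, "women's": -1.10 -- the word "women's" is
# penalized not because it matters, but because the history it came from is skewed.
```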
So you might think, okay, this was an issue ten years ago. The problem has to be solved by now, right? I’ve got bad news for you: It hasn’t.
Fast forward to 2024, and AI job recruiters are still ghosting qualified candidates based on gender and ethnicity. The culprit? Training data that still reflects workplace inequalities, feeding algorithms the same bad patterns we’ve been trying to break for decades.
Sure, companies are throwing buzzword-filled solutions at the problem—“bias mitigation strategies” like tweaking data or slapping penalties on discriminatory outputs. But let’s be honest: These are band-aids on a gaping wound. Until we fundamentally rethink how we build and use AI in hiring, all we’re doing is giving old prejudices a shiny new interface.
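For the curious, here’s roughly what one of those “tweak the data” fixes looks like under the hood: a standard reweighing scheme, sketched in Python with invented records. It rebalances the training data so that group membership and hiring outcome look statistically independent, which is genuinely useful as far as it goes.

```python
from collections import Counter

# Reweight training examples so group membership and the hiring outcome look
# statistically independent before a model ever sees them. Records are invented.
records = [
    {"group": "men", "hired": True}, {"group": "men", "hired": True},
    {"group": "men", "hired": True}, {"group": "men", "hired": False},
    {"group": "women", "hired": True}, {"group": "women", "hired": False},
    {"group": "women", "hired": False}, {"group": "women", "hired": False},
]

n = len(records)
group_counts = Counter(r["group"] for r in records)
label_counts = Counter(r["hired"] for r in records)
cell_counts = Counter((r["group"], r["hired"]) for r in records)

def weight(group: str, hired: bool) -> float:
    """Expected frequency of this (group, outcome) cell under independence,
    divided by its observed frequency."""
    expected = group_counts[group] * label_counts[hired] / n
    return expected / cell_counts[(group, hired)]

for group, hired in cell_counts:
    print(f"{group:>5} hired={hired}: weight {weight(group, hired):.2f}")
# Hired women (and rejected men) get weights above 1, hired men below 1, so the
# reweighted data no longer says "being a man predicts getting hired" -- a real
# technique, but it patches the data without touching whatever produced the skew.
```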
Where does that leave us? AI hiring tools like Amazon’s were supposed to fix everything—streamline hiring, eliminate bias, save the day. Instead, they’ve proven that bias doesn’t disappear when you hand it to a machine—it just scales up, turning into what’s called “bias amplification.” And while the tech world keeps hyping quick fixes, the truth is we’re still stuck with the same confounding systems.
In this week’s newsletter, we’ll pull back the curtain on how AI tools are still getting it wrong, what tools like Gender Decoder are doing to help, and why ethical governance isn’t just a buzzword—it’s the only way forward.
Tech Tools You Can Use: Gender Decoder
Growing up in small-town Texas in the 1970s, prejudice wasn’t something anyone talked about. You didn’t have to: It was always there in the background—like dusty high school football fields and Baptist potlucks.
That changed for me one night when my mom and I watched Giant, the old Rock Hudson and James Dean epic set in West Texas and released in 1956. I was a teenager, more interested in watching a classic film than thinking about race and class divides, but my mom—always more tuned in—made sure I was paying attention. She didn’t just want me to see the movie; she wanted me to see everything it was saying.
When the credits rolled at the end, she asked me what I made of it. I liked it—after all, Giant’s a classic for plenty of reasons—but there was one scene near the end that didn’t make sense to me: when Rock Hudson’s rich rancher character is beaten up for telling a white diner owner to serve a Mexican family. I told my mom I didn’t get it because Mexican people can “obviously” eat wherever they want.
My mom was happy yet horrified. Yes, she was glad I didn’t display the implicit prejudices that were everywhere in our town, but she also wanted me to see the reality I’d missed in our own family. She connected the dots I’d never noticed: family tensions over my aunt marrying a Hispanic man, their long absences from family Christmas gatherings due to scheduling “conflicts,” and the way my grandparents only warmed up after my cousins were born. The lesson? Bigotry and bias aren’t just loud, hateful moments. They’re also quiet, systemic, and most often invisible to the people who aren’t directly affected.
Systemic bias doesn’t always show up as a diner owner refusing service—it’s often woven into the everyday details we overlook, like the language we use. In hiring, for example, that language shows up in job descriptions: the seemingly harmless words and phrases that subtly signal who does or doesn’t belong. Fixing systemic bias can feel overwhelming—like trying to dismantle a skyscraper with an Ikea screwdriver. But the first steps can be small, such as rewriting one job description at a time.
That’s where a tool like Gender Decoder comes in, ready to flag those subtle bias cues before they quietly tell qualified candidates, “This job isn’t for you.” It scans job descriptions for gender-coded words—like a spell check for bias. Masculine-coded terms such as “dominant” and “competitive” or feminine-coded ones like “nurturing” and “supportive” subtly shape who feels welcome to apply. An ad calling for “driven leaders who dominate the market” might as well flash “men only” in neon. Gender Decoder cuts through this noise to provide a clear blueprint for rewriting ads that invite everyone—not just the usual suspects.
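If you’re wondering how complicated the core idea is, it isn’t. Here’s a stripped-down sketch in Python; the word lists are a small illustrative subset I chose for this example, not Gender Decoder’s actual lists, and the real tool does more than this, but the mechanics are the same: count the coded words and see which way the ad leans.

```python
import re

# Illustrative word lists only -- a small subset chosen for this example,
# not Gender Decoder's actual lists.
MASCULINE_CODED = {"dominant", "dominate", "competitive", "ambitious",
                   "assertive", "determined", "driven", "aggressive"}
FEMININE_CODED = {"nurturing", "supportive", "committed", "compassionate",
                  "understanding", "collaborative", "interpersonal"}

def audit_job_ad(text: str) -> dict:
    """Count gender-coded words in a job ad and report which way it leans."""
    words = re.findall(r"[a-z']+", text.lower())
    masc = [w for w in words if w in MASCULINE_CODED]
    fem = [w for w in words if w in FEMININE_CODED]
    if len(masc) > len(fem):
        verdict = "masculine-coded"
    elif len(fem) > len(masc):
        verdict = "feminine-coded"
    else:
        verdict = "neutral"
    return {"masculine": masc, "feminine": fem, "verdict": verdict}

print(audit_job_ad("We want driven, competitive leaders who dominate the market."))
# {'masculine': ['driven', 'competitive', 'dominate'], 'feminine': [], 'verdict': 'masculine-coded'}
```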
And it’s not just a buzzy tool—it’s backed by hard data. Gender Decoder’s creation is grounded in research, specifically a 2011 study titled “Evidence That Gendered Wording in Job Advertisements Exists and Sustains Gender Inequality.” The study found that job ads in male-dominated industries like engineering were packed with masculine-coded terms such as “ambitious,” “assertive,” and “determined.” By contrast, nursing roles leaned on feminine-coded words like “committed,” “compassionate,” and “understanding.”
The result? Even when fully qualified, women are less likely to apply to roles filled with macho phrasing. It’s not about malicious intent—these biases reflect cultural defaults baked into workplace norms for decades.
Still think this is a fringe issue? Think again. ZipRecruiter found that industries like business (94%), tech (92%), and engineering (92%) are practically drowning in gendered language. Funny how these same industries are constantly going on about their diversity problems, right? Here’s a clue: You can’t diversify your workforce when your job ads are quietly telling half the talent pool they don’t belong.
Look, using inclusive language isn’t about being “woke”—it’s about being smart. Gender-neutral wording doesn’t scare men off, but it does bring women and underrepresented groups into the fold. And let’s be real: Diverse teams aren’t just good for PR—they’re good for profits. Study after study shows they’re more innovative, make better decisions, and outperform their less-diverse counterparts.
Fixing hiring bias doesn’t always have to be rocket science—sometimes it’s about making smart, simple tweaks. Tools like Gender Decoder make it easy to take that first step: reworking one job ad at a time. Small changes build momentum, and before you know it, you’re creating stronger, more diverse teams that perform better on every level.
AI in the Wild: Ethical Technology Governance
When I first moved to Waco, Texas, I spent much of my first week looking for a house to rent without any luck. Either they were too expensive or too scary or too close to a house that was too scary. Finally, I found a place on the south side of town that would work. The landlord met me there—a woman in her seventies with tightly curled hair and a confidence that came from decades of saying exactly what was on her mind. She gave me a quick tour inside, then we stepped out onto the back patio to talk.
“You know,” she said, gesturing toward the yard, “I raised my kids in this house back in the '50s and '60s, but we had to move when they integrated the schools.” Then she leaned in and whispered, “I wasn’t about to have little Black and brown kids sitting next to my kids.”
She smiled kindly at me, waiting for me to agree.
I didn’t know what to say. I needed a decent place to live, so I just stood there, though I wanted to blurt out some imaginary story about a Black girlfriend and our biracial kids. But I didn’t.
She broke the silence and continued, “You seem like the right kind of guy.” Still smiling, she handed me the keys on the spot—no deposit, no credit check, not even a request for ID.
I knew exactly what she meant: I was the “white” kind of guy.
Her prejudice wasn’t loud or obvious, and that’s exactly the point: quiet, individual biases like my landlord’s become systemic inequities once they’re embedded in algorithms and applied at scale.
This isn’t theoretical because AI in housing has been caught red-handed: tenant screening tools that rely on eviction histories riddled with bias, mortgage algorithms that charge Black and Hispanic borrowers higher interest rates than equally qualified white applicants, and even Facebook’s AI that resurrected redlining by letting landlords exclude whole demographic groups with a click.
But it doesn’t stop at housing. The same ingrained inequities have crept into other critical private and public systems, shaping decisions far beyond who gets approved for a lease. AI can now determine everything from who’s flagged as a “prolific offender” by police to who qualifies for life-saving healthcare. Algorithms trained on biased data have become gatekeepers to opportunity—or barriers to survival—depending on which side of the system you’re on. The stakes aren’t just personal anymore; they’re societal.
And while companies happily promise to “self-regulate” their AI tools, history shows us how well asking the fox to guard the henhouse goes. That’s why external governance isn’t optional. The only question is whether we’ll steer our future toward fairness… or not.
To address these challenges, policymakers need proactive tools designed to keep pace with the rapid evolution of technology. A Playbook for Ethical Technology Governance, published by the Institute for the Future, offers exactly that—a framework for balancing innovation with equity by using strategic foresight to anticipate risks, mitigate harm, and maximize benefits before AI spirals out of control. Rooted in core democratic principles, the playbook guides governments through the complex realities of tech governance with a clear focus on fairness and accountability.
Users work through scenarios in which technology fails, and the playbook’s first one, “Algorithmically Accused,” is one we’ve already seen in real life: A county rolls out facial recognition software to ID suspects. Cutting-edge, right? Until it misidentifies a Black resident, leading to his arrest, hours of interrogation, and a PR firestorm. Activists demand a statewide ban, calling the system invasive and biased.
Your job in this scenario? Draft guidelines to rebuild trust, prevent future disasters, and convince the public the system isn’t rotten to the core. To do this, you’ll rely on a decision tree: What negative outcomes could follow (lawsuits, PR disasters, or worse)? What positive changes are possible (salvaging public trust, improving safety)? And how could this train wreck have been avoided?
Finally, use the playbook’s open-ended questions to get your team thinking: How do you vet private algorithms for fairness? Should AI outputs include disclaimers before they reach a courtroom? And at what point does banning AI systems outright become the only ethical choice? The goal isn’t just damage control—it’s building a system that puts public safety and fairness first, using foresight to keep future disasters off the docket.
A Playbook for Ethical Technology Governance is a critical step toward ensuring that technology serves society instead of steamrolling it. Its greatest strength lies in its proactive framework: It equips governments and organizations with the tools to anticipate risks and weigh ethical trade-offs before tech disasters happen, but it doesn’t make the tough calls for them—it’s a guide, not a shortcut.
As the playbook’s co-author Jake Dunagan and futurist Stuart Candy put it, “Foresight without ethics is diabolical. It is a speedboat without a rudder, plowing through everything in its path.” The playbook’s here to slap a rudder on that speedboat—but policymakers have to steer. The goal? Make sure AI systems—whether in housing, policing, or beyond—serve equity instead of bias.
Cutting Through the AI BS: Bias Amplification
My dad, who has a PhD in the metaphysical poetry of Andrew Marvell, styled himself in the 1970s as an intellectual redneck philosopher at the ag college where he taught in rural Texas. He was full of “witticisms” like, “If you can’t say something funny and mean, then just say something mean.”
Another of his favorites was, “Things are fair if they’re equally unfair for everyone.” He’d laugh when he said it, but he wasn’t joking. That saying captures how many people think about fairness—not just in life but in systems too. The logic feels airtight at first glance: If everyone’s subject to the same rules, doesn’t that make the system fair?
But what feels like common sense at the dinner table can go off the rails when applied to complex systems—especially those powered by AI.
The problem? Fairness doesn’t work like that in real life. Systems aren’t built in a vacuum—they’re fed data straight from our convoluted, unfair world. When AI applies those rules “equally,” it doesn’t erase bias; it amplifies it. That’s bias amplification in a nutshell: Algorithms take hidden prejudices and scale them into automated injustices. It’s the ultimate illusion of fairness—everything looks neutral on the surface while inequities quietly entrench themselves underneath.
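To make that concrete before we get to a real-world case, here’s a toy simulation in Python, using numbers I invented rather than anything pulled from an actual system. The historical data has a 20-point hiring gap between two groups, the “model” applies one accuracy-maximizing rule to everyone, and because the individual skills signal is weak, the group pattern ends up dominating the decisions almost entirely.

```python
import random
from collections import Counter

random.seed(0)

# Toy setup (all numbers invented): historical hiring favored group A.
HIRE_RATE = {"A": 0.60, "B": 0.40}   # 20-point gap in the historical data
NOISE = 0.45                         # the individual "skills signal" is weak

def make_dataset(n=50_000):
    """Each record: (group, noisy skills signal, historical hire decision)."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        hired = random.random() < HIRE_RATE[group]                 # biased label
        signal = hired if random.random() > NOISE else not hired   # weak evidence
        rows.append((group, signal, hired))
    return rows

train = make_dataset()

# "Model": for each (group, signal) cell, predict the majority historical outcome.
# That's the accuracy-maximizing rule on this data, applied uniformly to everyone.
counts = Counter((g, s, h) for g, s, h in train)

def predict(group, signal):
    return counts[(group, signal, True)] > counts[(group, signal, False)]

# Apply the "same rule for everyone" to fresh candidates and compare hire rates.
test = make_dataset()
for g in ("A", "B"):
    candidates = [(grp, sig) for grp, sig, _ in test if grp == g]
    rate = sum(predict(*c) for c in candidates) / len(candidates)
    print(f"group {g}: {HIRE_RATE[g]:.0%} hired historically -> "
          f"{rate:.0%} selected by the model")
# Because the skills signal is weak, the group pattern dominates: a 20-point
# gap in the data becomes a near-total gap in the model's decisions.
```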
Take facial recognition technology as an example. On paper, it sounds like the perfect equalizer: Everyone’s face gets scanned, and the algorithm treats everyone the same. But studies show these systems misidentify Black and Asian faces at far higher rates than white ones. The rules are applied equally, sure—but they’re flawed rules built on biased data. That’s not fairness; it’s bias on steroids.
Unfairness isn’t the only problem; accountability is, too. When facial recognition systems make mistakes, who’s responsible? Too often, the blame is passed around: Vendors claim it’s user error, police say they’re just following the tech, and developers hide behind the code. This game of hot potato leaves individuals—often from marginalized communities—facing wrongful arrests or worse, while no one steps up to fix the underlying flaws. Algorithms don’t exist in a vacuum; they’re created, implemented, and used by humans. Treating them as infallible shifts agency away from people and locks errors into systems that are hard to fix.
The AI industry loves to claim that systems are “fair” because they’re applied uniformly, but uniform rules often entrench inequities instead of erasing them. My dad’s saying proves the point: being “equally unfair” doesn’t make a system just. True fairness isn’t about spreading inequity equally; it’s about dismantling the systems that perpetuate it.
So what’s the solution?
As expected, the AI industry continues to pitch self-regulation as the solution, but here’s the reality: A year after pledging responsible AI practices to the White House, many tech giants have little to show beyond vague mission statements. Meanwhile, the National Institute of Standards and Technology (NIST) points out that AI bias isn’t just a coding problem—it’s a people and systems problem that can’t be fixed with a few tweaks. Without external oversight to hold these companies accountable, we’re not just spinning our wheels; we’re paving the road to even deeper systemic inequities.
As for me, my job interview where I was a dead ringer for a Communist organizer ex-husband still haunts me on occasion—but it’s a good example of how easily bias can slip into systems on a micro level. (And don’t even get me started on the teaching interview where an assistant dean casually said, “So, Mark, tell us about Jesus.”) Whether it’s the questions we ask in hiring, the language in job ads, or the algorithms shaping public policy, fairness doesn’t happen by accident. It takes deliberate effort and, let’s be honest, a lot of uncomfortable conversations. The bigger question is: Are we willing to do the work?
As we close, let me turn it over to you. What biases have you noticed in everyday systems—whether in job postings, tech platforms, or your workplace? How do we balance innovation with accountability? And here’s the big one: If you had the power to enforce one rule for ethical AI, what would it be? Drop your thoughts in the comments—I’m curious to see how you’d steer the future of AI.
Next week, we’re tackling another thorny topic: privacy and surveillance. From smart devices that are way too nosy to government systems turning “public space” into 24/7 monitoring zones, AI is rewriting what privacy means. Buckle up: There’s a lot more at stake than just your browser history.
AI is shaping policing, hiring, housing policies, and everything in between. Share this post with anyone who needs to know how bias amplifies injustice—and what we can do about it.
Mark Roy Long is Senior Technical Communications Manager at Wolfram, a leader in AI innovation. His goal? To make AI simple, useful, and accessible.