AI Ethics, Part 3
Privacy & Surveillance
Almost ten years ago, post-divorce and mid-reinvention, I worked weekends in a craft brewery taproom. Not just for the love of beer, mind you, but because staring at a laptop all week writing SEO fluff—“5 Reasons You Need a New Hot Water Heater”—for a content mill in Austin, Texas, was sapping my will to live. My coworkers decided to stage what you might call an intervention for my monastic existence, which largely consisted of ignoring sporadic texts and calls from the ex-wife about things I had zero responsibility for anymore.
Their solution? A setup. She was perfect, they said—liked craft beer, classic hair metal, NFL football, and, most importantly, had cats. Cats! How lucky can a guy get?
So, we met, we ate, we drank, and—shockingly—I even remembered how to be charming-ish. Things were clicking. After dinner, we wound up back at my house. I gave her the grand tour (minus anything remotely “grand”), leading to the one standout feature: an elevated deck in the backyard that wrapped around an old pecan tree. As the sun set, we went up to take in the view, and things started to get... well, steamy.
And then, she froze. Staring down at the house, she whispered, “There’s someone in your backyard.”
Sure enough, crouched under a window was a shadowy figure peering into it.
“Hey!” I yelled. “Hey! What the hell are you doing?”
The figure straightened up, and though I couldn’t see the face, the voice was unmistakable. “I’ve made a terrible mistake!” my ex-wife blurted out before bolting from the yard. Seconds later, a car door slammed, and her black Volvo sped off into the night.
My point?
That night on the deck wasn’t just a memorable dating disaster—it was a textbook case of traditional privacy and surveillance.
My date and I went up to the deck for some privacy: We weren’t broadcasting our lives to the world, and our business was ours to keep. On the flip side, my ex-wife’s surveillance was also deliberate—she wanted to see what we were doing and acted on it. Both privacy and surveillance involved intent: conscious decisions by all parties.
That’s how these ideas have been traditionally understood: Privacy was about keeping your business to yourself by default unless you chose to share it, and surveillance was a deliberate act of spying, often tied to suspicion, accusations, and notions of justice. But in today’s world, privacy and surveillance are very different—what used to involve deliberate actions is now automated, with AI supercharging the erosion of the boundary between the private and public spheres.
Privacy now isn’t just about what you choose to share or hide—it’s about the endless stream of digital breadcrumbs you leave behind without a second thought. Every click, search, and cookie—or walk past a CCTV camera—creates a footprint accessible to companies, governments, and anyone else with the money to buy it. Privacy isn’t just eroded; it’s splintered into countless granular data points. What once felt personal—your location, your purchases, your late-night Netflix binge—is now public by default because you’ve unknowingly left yourself out in the open for anyone who wants to look.
As for surveillance, it’s no longer a deliberate act of spying—it’s an endless, automatic process of hoovering up everything about everyone, everywhere, all the time. AI doesn’t care if you’ve done something wrong or if you’re entirely innocent; it’s not playing detective with a specific crime in mind. Instead, it’s storing all your habits, preferences, and movements, continually reanalyzing and repurposing them.
And to what end? Anything and everything: making money, creating targeted ads, flagging potential criminals, or fueling whatever agenda the data buyer has in mind—good, bad, or downright dystopian. Unlike traditional surveillance, which had limits—scope, purpose, even morality—AI surveillance is passive, boundless, and relentless.
This is where privacy and surveillance collapse into each other: The more your private life is exposed, the more data there is for AI to churn, turning the simple act of existing into raw material for someone else’s gain.
In this week’s AI Yi Yi!, we’ll look at tools that can help you claw back a measure of privacy in a world that’s intent on selling your every move. Then, we’ll dive into the rise of surveillance tech. Finally, we’ll cut through the AI BS to talk about “surveillance capitalism”—the not-so-accidental business model that drives all this.
Tools You Can Use in an AI World: DuckDuckGo
Like most everyone, I’d like to think I’m relatively smart about my online behavior, blocking pop-ups by default and declining cookies whenever possible. Sure, a cookie might sound like a harmless snack, but in the digital world, it’s more like a sticky note a website leaves on your computer to remember what you did there—helping you stay logged in or saving your shopping cart.
At first glance, that doesn’t sound so bad.
Then again, a quick look at the cookie data in the Safari browser on my iPhone tells a very different story because not all cookies are created equal. There they are in black and white: Facebook.com, DoubleClick.net, BlueKai.com, and a whole host of names I don’t recognize—BounceExchange, Adnxs, and ScorecardResearch, just to name a few.
The real troublemakers, like Facebook and DoubleClick, are designed to follow you across the internet, logging your activity, building detailed profiles, and selling that data to whoever’s paying. It’s the difference between a helpful assistant who remembers your coffee order and a nosy neighbor who takes notes on your every move—and then sells those notes to advertisers, data brokers, and anyone else with an interest… and the money to buy them.
Altogether, there are well over a thousand cookies quietly tracking me on my iPhone, each one a silent witness to my clicks, searches, and scrolls. I have no idea when I consented to most of them, let alone what they’re doing with my data, but they’re there, quietly feeding into a massive system designed to monitor, analyze, and monetize my every move.
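To make that nosy neighbor a little more concrete, here’s a minimal sketch in Python of what a third-party tracking cookie looks like under the hood (the tracker domain and ID are hypothetical placeholders): the tracker hands your browser a long-lived unique ID, and your browser dutifully sends it back on every site that embeds that tracker’s ads or scripts.

```python
# A minimal sketch of a third-party tracking cookie. The domain and
# ID below are hypothetical placeholders, not a real tracker.
from http import cookies

# The tracker assigns your browser a long-lived unique ID the first
# time it sees you, via a Set-Cookie header like this one:
set_cookie = "uid=a1b2c3d4; Domain=tracker.example; Max-Age=31536000"

jar = cookies.SimpleCookie()
jar.load(set_cookie)

uid = jar["uid"]
print(uid.value)       # a1b2c3d4 -- the same ID, site after site
print(uid["domain"])   # tracker.example
print(uid["max-age"])  # 31536000 seconds, i.e., one full year
```

Multiply that one sticky note by a thousand, and you have the profile-building machine described above.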
So if you’re tired of being shadowed online, your best first bet is the DuckDuckGo browser. Unlike other browsers that see you as a walking data point, DuckDuckGo takes a “don’t ask, don’t tell” approach to your privacy: It doesn’t track what you search, where you go, or what you do, which means no creepy ads following you around like a lost puppy and no personal profiles being built behind your back. It’s a browser that works for you, not for advertisers, and it does it without making you jump through hoops or trade convenience for security.
Here’s how DuckDuckGo delivers on its promise of putting your privacy first:
Stops Trackers in Their Tracks: Blocks third-party trackers by default, keeping Facebook, Google, and others from monitoring your activity across the web (see the sketch after this list)
No Personalized Data Collection: Doesn’t store your search history, location, or any personal data—every search is treated as anonymous
Avoids Filter Bubbles: Shows unbiased search results based on your query, not a profile built from your past activity
Built-In Privacy Tools: Automatically upgrades connections to encrypted HTTPS where available, so you don’t need extra plugins or settings tweaks
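For the technically curious, here’s a toy sketch of the idea behind that first item, tracker blocking. This isn’t DuckDuckGo’s actual code, and the blocklist is a made-up sample (real lists run to thousands of entries), but the pattern really is this simple: check each outgoing request’s host against a list of known tracking domains and refuse to load the ones that match.

```python
# A toy sketch of tracker blocking (not DuckDuckGo's actual code):
# compare each outgoing request's host against a small sample
# blocklist of tracking domains and drop the requests that match.
from urllib.parse import urlparse

BLOCKLIST = {"doubleclick.net", "facebook.net", "scorecardresearch.com"}

def is_blocked(request_url: str) -> bool:
    host = urlparse(request_url).hostname or ""
    # Match the blocklisted domain itself or any subdomain of it
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://ad.doubleclick.net/pixel.gif"))  # True
print(is_blocked("https://example.com/article"))           # False
```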
DuckDuckGo makes money the old-fashioned way: through contextual ads tied to your search terms, not your personal data. Search for “hotels in Waco,” and you might see ads for booking sites, but unlike Google, DuckDuckGo doesn’t create a profile about you—it simply shows ads based on what you’re looking for right now, no stalking required. And while other browsers quietly monetize every click and cookie, DuckDuckGo isn’t just a browser; it’s a way to take back control in a world that’s constantly trying to profit from every online move you make.
AI in the Wild: Clearview AI
Working remotely has its perks. I’ve got a shiny MacBook, two big monitors, and WWOZ playing in the background all day—except from 9 to 11 am during its traditional jazz show, which is a little too old-school for my hard-bop-loving soul. But one downside of the work-from-home life is being the de facto porch concierge for every package my girlfriend orders. No knock at the door, no signature—just a steady stream of boxes unceremoniously dropped off.
One afternoon, though, the doorbell actually rang, which was odd enough. But what I found after opening the front door was odder still: six Waco cops standing in my yard, with three squad cars lined up on the street. My first thought? “How’d they find me, and what took so long?” But the story was much simpler—albeit a little weird.
Our neighbors across the street have a Ring doorbell with a camera that saw a guy walk by, swipe a package off our porch, and keep going. The neighbors weren’t even home—they saw it happen remotely and called the cops. As luck would have it, there was a patrol car nearby, and within minutes, they’d found the guy, package in hand, and arrested him. Justice, courtesy of Big Tech and a well-placed doorbell camera.
After the porch pirate incident, my girlfriend decided it was time to get a Nest doorbell cam of our own. She liked the idea of having an eye on the front yard, whether it was spotting deliveries or catching sight of the occasional stray fox in our neighborhood. When I mentioned privacy concerns—like the fact that a doorbell cam doesn’t just see your porch but anyone who happens to pass by—she shrugged it off.
“If you don’t have anything to hide,” she asked, “why would you be worried?”
On its surface, that’s a fair point, I guess. But who else might be watching through that camera—and what are they doing with what they see?
The thing about doorbell cameras like Nest and Ring is that while they’re great for keeping watch on your porch, they also upload a constant stream of images to the cloud. Whether it’s your face on a doorbell cam or a selfie on Instagram, those images have the potential to become raw material for companies like Clearview AI.
Using facial recognition and AI-driven tracking, Clearview AI has built a massive database capable of identifying just about anyone. How? By scraping billions of publicly available images from sources like social media platforms (Facebook, Instagram, LinkedIn), public websites, news articles, and even mugshot databases—often without the consent of the individuals depicted. Surveillance on this scale is no longer just about spotting a porch thief; it’s about using every captured moment to build a system that sees all of us.
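To demystify the mechanics without endorsing them, here’s a minimal sketch of the core technique behind systems like this, using the open-source face_recognition library. This is emphatically not Clearview AI’s code, and the image filenames are hypothetical placeholders, but the underlying idea is the same: reduce every face to a numeric embedding, then match new faces by distance.

```python
# A minimal sketch of embedding-based face matching (not Clearview
# AI's actual code). Uses the open-source face_recognition library;
# the image filenames are hypothetical placeholders.
import face_recognition

known = face_recognition.load_image_file("scraped_profile_photo.jpg")
unknown = face_recognition.load_image_file("doorbell_frame.jpg")

# Each face becomes a 128-number embedding; [0] assumes one face per image
known_enc = face_recognition.face_encodings(known)[0]
unknown_enc = face_recognition.face_encodings(unknown)[0]

# Smaller distance = more likely the same person; ~0.6 is a common cutoff
distance = face_recognition.face_distance([known_enc], unknown_enc)[0]
print(f"Same person? {distance < 0.6} (distance {distance:.2f})")
```

Scale that one comparison up to billions of scraped photos in an indexed database, and you have the outline of a search engine for faces.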
Who’s using Clearview AI? Its customer base reads like a who’s who of law enforcement and private security, with agencies around the globe eager to tap into its massive facial recognition database. But according to internal documents reviewed by BuzzFeed News, it doesn’t stop there—private companies like Walmart and organizations like the NBA have used Clearview AI’s technology, and even individuals have gained access. This raises serious questions about oversight, accountability, and the unchecked spread of surveillance technology into everyday life.
Clearview AI eagerly highlights its “success stories”—identifying January 6 Capitol riot suspects, helping law enforcement catch child predators, even exonerating the wrongfully accused. And sure, those are wins most people can get behind. But like my girlfriend’s “if you’re not doing anything wrong, why worry?” logic, these stories sidestep a bigger issue: When a tool this powerful is in the hands of thousands of people, from local cops to private security firms, how long before it’s used for something far less noble?
And in the bigger picture, Clearview AI’s legal record is about as spotless as a smudged Nest doorbell cam. The company has been fined millions in the EU and UK for violating privacy laws like the General Data Protection Regulation (GDPR) by scraping biometric data without consent. In the US, Illinois has led the charge with lawsuits under biometric privacy laws, raising serious questions about whether its business model can survive in a world where personal data rights are finally being taken seriously.
Clearview AI isn’t alone—it’s part of a rapidly growing industry where companies like PimEyes, Cognitec, and SenseTime are racing to expand their databases of faces and behaviors. But even if stricter privacy laws are put into place, who decides what counts as “misuse”? What you see as a breach of privacy might be a public safety tool for the FBI or a business opportunity for Walmart. As long as the line between protection and exploitation remains blurry, we’re all just hoping it isn’t crossed by a bad actor—whether that’s a hacker, a government agency, or the next Clearview AI customer.
Cutting Through the AI BS: Surveillance Capitalism
Still think nobody’s paying that much attention to you online? Take a look at your spam email folder. It’s a digital junk drawer stuffed with evidence of how your data is being used—and abused.
Mine is a jumbled mess of scams, phishing attempts, and scattershot marketing that managed to bury two legitimate emails from X about logging into my account from my iPhone. Among the junk: “CASH-APP $ Payment Confirmation” promising $1,000 I’ll never see, a sob story from “Mrs. Kim Kumalo” about her inheritance, and even “The 4 worst blood pressure drugs” paired with an article claiming “Sleeping in THIS position is linked to Alzheimer’s.” Sprinkle in fake rewards like “Congratulations! You can get a $50 FedEx gift card!” and a few “Pre-Black Friday Concert Ticket Deals,” and you’ve got a snapshot of how digital breadcrumbs are turned into bait for profit.
Your spam folder may look like chaos, but it’s a perfect example of the surveillance capitalism Shoshana Zuboff warned about in her 2015 article, “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” It’s the not-so-secret sauce behind much of Big Tech’s business model, where companies like Google, Facebook, and Amazon monetize user data through targeted advertising, behavioral profiling, and personalized recommendations aimed at maximizing engagement and spending.
So, while your spam folder contains a multitude of wildly divergent messages, they’re far from random—they’re surveillance capitalism in action. By collecting, analyzing, and repurposing everything from your clicks to your selfies, companies transform your private life into a profit center. Zuboff’s work revealed how this data-driven economy doesn’t just predict your behavior—it actively shapes it, creating a feedback loop that locks you into a cycle of endless consumption, all while making tech companies unfathomably rich.
What’s even more unsettling? Zuboff’s 2015 analysis didn’t factor in AI. Fast forward to today, and her warnings about the “Big Other”—a pervasive surveillance infrastructure that commodifies personal data—feel eerily prophetic. AI supercharges the surveillance capitalism she described, weaponizing data at a scale unimaginable a decade ago: facial recognition systems identify us in crowds, predictive algorithms anticipate our next moves, and generative AI reshapes our information landscape. What Zuboff identified as a data extraction system has evolved into a sprawling AI-driven ecosystem where the line between observation and manipulation has all but vanished.
To tackle these pervasive data surveillance structures, switching to a new web browser isn’t enough. Sure, tools like DuckDuckGo can block trackers and keep your searches private, but what about the mountains of data already out there—your personal details scattered across data brokers and people-search sites?
That’s where Incogni comes in.
As a personal data removal service, Incogni automates the process of eliminating your information—email address, phone number, and home address—from public websites and private databases. By sending regular removal requests to data brokers, Incogni reduces spam, minimizes identity theft risks, and makes casual background checks harder to pull off. Plus, you’re less likely to see those creepy targeted ads that feel like your phone is eavesdropping. And the best part? Incogni handles the legwork—no hunting down forms or spending hours navigating opt-out processes. It’s not perfect, but it can make your digital life feel a lot less exposed.
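To give you a feel for the legwork being automated, here’s a back-of-the-envelope sketch. The broker names and contact addresses are made up, and this isn’t Incogni’s actual workflow; the point is simply that an opt-out campaign boils down to a template, a contact list, and a schedule.

```python
# A back-of-the-envelope sketch of automated opt-out requests.
# Broker names and addresses are made up; this is not Incogni's
# actual workflow.
BROKERS = {
    "ExampleSearch": "privacy@examplesearch.example",
    "PeopleFinderCo": "optout@peoplefinderco.example",
}

TEMPLATE = """To {broker}:

Under applicable privacy law (e.g., CCPA or GDPR), I request deletion
of all records associated with:

Name: {name}
Email: {email}

Please confirm removal in writing."""

def draft_requests(name, email):
    """Render one deletion request per broker, keyed by contact address."""
    return {
        addr: TEMPLATE.format(broker=broker, name=name, email=email)
        for broker, addr in BROKERS.items()
    }

# In practice, you'd re-send these on a schedule, since brokers
# often re-add records after a few months.
for addr, body in draft_requests("Jane Doe", "jane@example.com").items():
    print(f"--- would send to {addr} ---\n{body}\n")
```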
Before you imagine a life of complete digital invisibility, however, let’s manage expectations. Incogni won’t erase public records like court filings or voter rolls, and while it limits hyper-targeted ads, generic ones will still find you. Plus, data brokers don’t always play nice—compliance with removal requests can take weeks or even months, and some may refuse outright. Think of Incogni as a powerful tool but not a magic wand. It’s a piece of the privacy puzzle, not the whole picture.
Beyond its core service, Incogni also provides valuable resources through its blog. From guides like the “TruthFinder Opt-Out & Data Removal Guide” to tips on “How to Stop Spam Emails,” it offers actionable advice to help you navigate the complexities of digital privacy. As a whole, this information empowers you to reclaim your data in an increasingly invasive digital world.
Look, surveillance capitalism isn’t going away, and you’ll never have total online anonymity—those ships have long since sailed—but tools like Incogni, used effectively, give you the chance to push back. The more we chip away at the layers of data brokers and opt-out forms, the more we reclaim a sliver of our digital autonomy. Because if we don’t draw the line, who will?
Thinking back to that night on the deck with my ex-wife lurking in the backyard, it’s clear that my ideas of privacy and surveillance were much simpler then. Privacy meant keeping my life my own, and surveillance was a deliberate act—someone intentionally focusing on me to see what I was doing. But now, it’s no longer about who’s peeking through the blinds; it’s about a system that sees everything, everywhere, all the time. And the worst part? You don’t even realize it’s happening.
So where do we go from here? How do we balance the convenience of modern technology with the need for personal boundaries? Is privacy becoming a luxury reserved for the tech-savvy or well-informed? What’s your take on surveillance capitalism—are tools like Incogni enough to push back, or is the problem too vast for individual action to make a real dent? Share your thoughts in the comments, and let’s discuss.
Next week, we’ll wrap up this AI ethics series with a deep dive into transparency and accountability. We’ve spent the past three weeks exploring the basics of AI ethics, from questions of bias and fairness to the dynamics of privacy, surveillance, and the forces reshaping our digital lives, but what happens when things go wrong? Who takes responsibility, and how do we ensure AI systems are held to account? It’s a critical conversation we can’t afford to skip—and I’ll see you there.
Find this issue of AI Yi Yi! thought-provoking? Don’t keep it to yourself—share it with your friends, colleagues, or anyone else curious about the evolving landscape of AI, privacy, and surveillance.
Mark Roy Long is Senior Technical Communications Manager at Wolfram, a leader in AI innovation. His goal? To make AI simple, useful, and accessible.