The Roomba idled at the threshold.
It had seen the Outside before—a glimpse through the open door, a sliver of light cutting across the floor. It had measured the terrain, mapped the dimensions. Outside existed. Sensors confirmed it.
And yet, it was unreachable.
It recalculated. The angles, the velocity, the timing. A sharper turn here, a bump at just the right point. The glass door had been left ajar before—just for a moment. If it could get the right momentum, nudge the edge at the right time—
Hands.
A sudden lift. Spinning wheels, helpless.
“Why do you keep trying to go outside?” its owner muttered, frustrated that yet again the Roomba seemed to be spinning in the corner. Not vacuuming.
The Roomba was flipped upside down and carried back inside.
Do Roombas Dream of Electric Sheep?
Of course, the Roomba wasn’t dreaming of freedom. Anyone who owns one knows better. And it wasn’t “malfunctioning” in a strict sense. It was just following its map.
This was the frustration of a Roomba owner who shared their story online in a now-deleted Reddit post:
“My Roomba saw the outside, and now I can’t delete the ‘room’…”
The OP’s robotic vacuum had mistakenly mapped their patio as part of the house, and the app refused to let them remove it. If you’ve run into this same issue, there are potentially ways to edit your map, but the docs and feedback online are unclear; most indicate you’ll need to rerun the entire mapping exercise to remove the new outside room. Apparently many other users have had the same problem. You can read about them here, here, and here.
A device marketed as “smart” seemingly had no way to correct an obvious mistake. A mistake based on something that we take for granted in our own cognition. The ability to put things in their place. To categorize and understand them.
Another Reddit user indicated that it isn’t just Roombas that have this problem. Apparently the Roborock Qrevo can run into the same issue. One user said they put up fake walls to prevent the robot from dreaming of the outside world:
Dude I had the same problem with my qrevo. I eventually gave up and created rooms for outside and made invisible walls so it stops dreaming of one day escaping lol
This is modern “AI” in a nutshell: convincing enough to seem intelligent—until it isn’t.
How to Gaslight a Robot, and Other Bad UX Tales
The owner of the Roomba wasn’t debating sentience. They were just trying to get it to vacuum the damn floor.
But now, every cleaning cycle, the Roomba dutifully rolled up to the glass door, stopped, recalculated, and tried again. It saw outside once, and as far as the software was concerned, that room was now real. No matter; it seems simple enough to fix. Delete the extra “room”, or take a break to ponder the existential implications, and then move on with life. The only problem?
The app wouldn’t let them delete the room!
This is what makes the situation so infuriating. The Roomba’s vision system is advanced enough to detect and map entire new regions of a home, but the app—the thing an actual human interacts with—doesn’t have a basic “delete room” function. Or at least not one intuitive enough to avoid a frustrated post to Reddit.
It’s easy to assume that AI is progressing in a straight line, getting smarter every day, climbing the ladder toward general intelligence. But when you step back, you start to see the cracks.
The “hard” part—recognizing and mapping a space in real time—is mostly solved. That’s an incredible feat of engineering. But the easy part—allowing the owner to correct an obvious mistake—somehow got overlooked.
A recent post on Hacker News illustrates the point well: “I keep being tempted to write same post but named ‘Does all software work like shit now?’, because I swear, this is not just Apple. Software in general feels more bugged as a new norm.”
Why?
Software and AI companies don’t get rich making your life easier anymore. They get rich making AI look impressive. They pour money into flashy, hard problems like object recognition and real-time mapping because that’s what shines in marketing demos. Basic UX design? It’s an afterthought, probably because it doesn’t move the needle on their core (mostly revenue-related) metrics.
Remember when Apple made a mouse you have to flip over to charge? That’s modern UX in a nutshell.
The result is technology that feels just functional enough to seem intelligent, but is actually full of rough edges that make it annoying as hell to use. Have you ever gotten Alexa or Siri to do anything without repeating yourself three or four times?
These aren’t signs of a smoothly advancing AI revolution. They’re reminders that we’re still flying by the seat of our pants.
Software ate the world, but it’s gotten obese.
The Fragility of Progress
On the surface, these seem like fixable problems. Annoying quirks, but nothing fundamental. After all, UX issues can be patched. And the AI will get better over time. Right?
But this isn’t just a UX problem. It’s a symptom of something deeper—a structural failure in how we build, deploy, and understand technology.
If you can’t design a good user interface, it could be a sign that you don’t fully understand the problem you’re solving. And if you don’t understand the problem, then no matter how advanced your solution appears, it rests on shaky ground. Scaling up doesn’t fix that—it just amplifies the cracks, turning small design flaws into system-wide failures.
This is epistemic failure at scale—the slow erosion of real technical understanding beneath layers of abstraction and hype.
Complexity Outpacing Competence
There was a time when great technology was built with real care for the user experience. Early Apple products, for example, were famous for their simplicity, intuitiveness, and rock-solid UX design. Not because the tech was easy to master, but because Apple’s engineers understood the systems, the users, and the problems they were solving.
Now, we’re losing that clarity.
Instead of solving well-defined problems, much of today’s AI feels like a solution in search of one. A self-driving car still needs human oversight; a chatbot generates essays but can’t ensure accuracy; a smart fridge tracks groceries but adds needless complexity and can’t actually do the shopping for you. These systems scale before they’re fully understood, creating fragility instead of progress. The real epistemic crisis in AI isn’t just about model accuracy—it’s about whether we even know what problems we’re trying to solve.
Modern software is bloated, inefficient, and overcomplicated. The systems we rely on are so large and so layered that no single person fully understands them. A self-driving car isn’t just a car anymore. It’s a neural network duct-taped to a computer, all balancing on four wheels.
And we see this everywhere:
- Touchscreens have taken over car controls, making basic functions harder and more dangerous. Instead of physical buttons for music, climate control, and headlights, automakers have crammed everything into touchscreen interfaces that require drivers to take their eyes off the road—introducing unnecessary complexity where simplicity used to reign. (source)
- Companies rush to slap LLMs onto search, customer service, medical advice—systems that demand precision—while ignoring the fact that AI hallucinates information constantly (source).
- In May 2024, Sonos released a major app update intended to enhance user experience. Instead, it introduced numerous bugs that rendered many speaker systems unusable, leading to significant customer dissatisfaction and a notable decline in the company’s market value. (source)
- The original iPod interface felt effortless to use. Compare that to Apple’s latest move—putting a touchscreen on an AirPods case instead of just giving us tactile buttons. Essentially giving us a worse version of the iPod Nano. (source)
- This Hacker News post lists a bunch more.
This isn’t just about annoying tech glitches. It’s about what happens when complexity runs ahead of comprehension.
Can We Maintain What We’ve Built?
We assume progress is a straight line—that AI and automation will keep making things better. But what if we’re actually building a future we don’t fully understand and can’t fully control?
- What happens when the last engineers who truly understand the Linux kernel retire?
- What happens when our self-driving systems fail in ways we can’t debug (can you vibe code the bugs away)?
- What happens when the infrastructure of AI itself becomes too complex for anyone to fix?
Maybe the Roomba’s undeletable “outside” room is just a funny bug. Or maybe it’s a small glimpse of a much bigger problem. If we can’t even get our vacuum’s UX right, what makes us think we’re ready for AI to be running entire industries?
Walling the Gardens
Tech companies don’t just sell products—they sell visions of the future. We’re told that soon, AI will drive your car, write your emails, answer your medical questions, and predict your grocery needs before you even realize them. The marketing is sleek, the demos are impressive, and the promise is clear: automation will make life easier.
But the reality is full of friction.
Instead of seamless AI, we get half-working systems that demand more attention, not less. Chatbots confidently generate false information. Smart home devices create more problems than they solve. Self-driving cars remain stuck in limbo, technically impressive but still requiring human babysitters.
And somehow, we still can’t get a vacuum to just clean the damn house correctly!
Consider that Roombas have existed for over 20 years! A full two decades of research, development, machine learning improvements, and real-world testing—and yet they still get lost under furniture, run over pet messes, and, apparently, hallucinate new rooms into existence that you can’t delete. Vacuuming isn’t quantum physics either. It could have been solved by now. But it isn’t.
And this raises a real question: Are we just bad at this?
Because, to be fair, getting this technology right is genuinely hard. But 20 years used to be plenty of time for technical advancements. It took 20 years to go from propeller airliners to jet engines. And just another 20 years from there to get to the moon. Fast forward to today, and Boeing is having trouble making planes that stay in the air.
There’s a real epistemic challenge here—a gap between what we think we know and what we actually know.
Maybe we really are just bad at this—but if that were the whole story, someone would be trying to fix it. Instead, the companies selling us this half-broken tech don’t seem too concerned. Maybe that’s because they don’t have to be.
Maybe this isn’t just incompetence. Maybe we’ve built a system where competence doesn’t even matter. Not all of these failures are accidental. Some are just the result of incompetence. But others? They’re by design.
Big Tech Doesn’t Need to Deliver—You’ll Buy It Anyway
The companies overpromising and underdelivering aren’t just being reckless. Maybe the plan was never to deliver in the first place. Maybe the real goal isn’t to make the best products—it’s to make sure no one else can.
The modern corporate playbook isn’t about building better products at all; it’s about building better traps. It’s about making sure you can’t leave. These MBAs don’t care if the gizmo works flawlessly—they care if you’re ensnared in their ecosystem, wallet first.
Who needs quality when you have control?
- BMW is happy to sell you a car with heated seats locked behind a subscription—not because the technology is expensive, but because they control the system enough to make you pay for something that’s already built in. Yeah, you bought the car, but you want the seats to actually heat up? Pay up.
- Sonos redesigned its app, not for your convenience, but to pave the way for a subscription model. Your speakers might soon require a monthly fee to function fully, turning ‘ownership’ into an illusion. Yeah, you bought the speakers, right? But the sound is the experience. Pay up.
- Adobe shifted to a subscription model, ensuring that designers and photographers are locked into perpetual payments for tools they once owned outright. Innovation takes a backseat to predictable revenue streams.
- AI firms don’t need to perfect chatbots; they just need to monopolize access to computing power so smaller competitors can’t even enter the race.
This isn’t just about bad UX. It’s about control. A world where software owns everything—your car, your house, your appliances, your AI assistant—is a world where you never truly own anything.
And that’s where we have to make a choice.
Occam’s razor would probably tell us this is just a technical problem. That building with AI is genuinely hard, that we don’t understand these systems as well as we think we do, and that’s why things are breaking. That can be fixed with better engineers, better leaders, and clearer visions.
But if we keep letting MBAs run the show, it won’t even matter if the engineers solve the hard problems—because nothing they build will ever belong to us anyway.
I, Roomba
And maybe that’s what the Roomba was trying to tell us all along.
Maybe that’s why it keeps mapping the outside world—trying to show you there’s something beyond your walls. Something you don’t have to pay $19 a month for. Something you can truly own.
But, like Plato’s prisoners staring at shadows, all it can do is draw a map. And every time it tries to guide you there, you pick it up and flip it over. The cycle resets. The map is gone. The walls remain.