The AI That Works While You Sleep — And Why That Should Make You Think Twice
- Feb 10

If you run a business, you already know the feeling. There aren't enough hours. You're juggling client demands, managing staff, chasing invoices, dealing with IT headaches — and somewhere in the background, technology keeps changing faster than you can keep up.
So when someone tells you that AI can now handle tasks on your behalf — monitoring your inbox, drafting responses, compiling reports, even managing parts of your operations while you sleep — it sounds like exactly what you need.
And honestly? It might be. But like most things that sound too good to be true, the reality deserves a closer look.
Over the past few months, something significant has shifted in the AI landscape. It's worth understanding — not because you need to become a technology expert, but because the decisions you make about AI in the next year or two could meaningfully affect your business.
What's Actually Changed?
For the past two years, most of us have used AI the way we'd use a search engine. You ask a question, you get an answer. Useful? Absolutely. But the AI waits for you. It doesn't do anything until you tell it to.
That's changed.
The new generation of AI tools — released by every major technology company in the past few months — doesn't wait. These "AI agents" can browse the web, write documents, send emails, manage workflows, and make decisions without you lifting a finger.
Microsoft's Copilot now includes what the company calls "digital coworkers" — AI agents with their own identity credentials and audit trails inside your Microsoft 365 environment. Google's latest tools run up to ten background tasks simultaneously in your browser. OpenAI's ChatGPT Agent can autonomously compile research, book services, and manage complex multi-step processes.
This isn't science fiction anymore. These capabilities are being built into the platforms many of us already use every day.
Then Came OpenClaw
While the big technology companies rolled out AI agents within their controlled ecosystems, an Austrian developer named Peter Steinberger had a different idea. He'd previously built and sold a software company for $116 million, and he wanted to solve a simple problem: why couldn't AI just do things for him instead of waiting to be asked?
His solution, called OpenClaw, is an AI assistant that lives directly on your computer. It runs in the background, remembers everything you've told it, learns your preferences, monitors your inbox, and takes action when it spots something that needs attention. It's always on — even while you sleep.
The concept struck a nerve. Within three days of its public release, 60,000 developers downloaded it. Within weeks, that number passed 145,000. Mac mini computers sold out across Silicon Valley as people set up always-on AI agents in their homes and offices.
Two million people visited the project's website in a single week.
For many, it felt like the future of personal computing had arrived. And in some ways, it had. But what happened next tells us something important about the gap between what technology can do and what we're actually ready for.
When Speed Gets Ahead of Safety
The first problem was almost comical. The project went through three name changes in a matter of weeks after trademark disputes. During one rename, the developer released his old social media accounts for about ten seconds before claiming new ones. In that brief window, automated scammer bots seized the abandoned accounts and launched a fake cryptocurrency token that reached a $16 million market cap before collapsing — leaving real people with real losses.
But the naming drama was the least of it.
Security researchers discovered that one in five plugins available in OpenClaw's marketplace was malicious — designed to steal passwords, credentials, and personal data. A related social platform built using AI-generated code left 1.5 million user authentication tokens and 35,000 email addresses sitting in an unsecured database. The fix? Two lines of code that should have been there from the start.
Andrej Karpathy, one of the most respected voices in artificial intelligence, initially called the project "incredible sci-fi." He later called it "a dumpster fire."
Why This Matters If You Run a Business
You might be wondering what an open-source project gone wrong has to do with your operations. The answer: more than you'd think.
The same capabilities that made OpenClaw exciting — AI that acts independently, accesses your files, connects to outside services, and communicates on your behalf — are now being embedded into the business platforms you already use or are evaluating.
And if you're an SME owner who's already stretched thin, struggling to find skilled IT staff and keep up with technology that changes faster than you can plan for, this adds yet another layer of complexity to an already complicated picture.
The question isn't whether AI agents will touch your business. They're already arriving. The question is whether they'll arrive safely.
The Three Things That Create Real Risk
Security experts have identified a combination of three capabilities that, when present in an AI agent without proper safeguards, create serious exposure:
It can read your private files — business documents, emails, financial data, client information.
It can process content from outside sources — web pages, incoming emails, third-party tools — some of which may contain hidden malicious instructions.
It can send information out — to external servers, email recipients, or cloud services.
Any AI tool that combines all three without strong security controls becomes a potential open door into your business. OpenClaw had all three with virtually no protection. But even enterprise platforms aren't immune — a vulnerability discovered in Microsoft 365 Copilot last year allowed hidden text in emails to instruct the AI to extract and send sensitive business data, without any user interaction required.
For businesses subject to data protection regulations like POPIA, this isn't just an IT concern. It's a compliance concern, a client trust concern, and potentially a business survival concern.
The Managed vs. Unmanaged Choice
This is where the conversation becomes practical.
The AI landscape is splitting into two distinct approaches:
Unmanaged AI runs on individual devices, downloads plugins from open marketplaces, operates with full system access, and has minimal oversight. OpenClaw is the extreme example — powerful and flexible, but with security that one researcher scored at 2 out of 100.
Managed AI operates within governed business platforms, with access controls, audit trails, compliance frameworks, and human oversight built in from the ground up. Microsoft Copilot, properly configured and professionally managed, is an example of this approach.
For SMEs, this distinction matters enormously. Most don't have a dedicated security team to vet AI plugins, monitor what autonomous agents are actually doing, or respond to the kind of supply-chain attacks that hit OpenClaw's marketplace. And honestly, they shouldn't need one. That's what your technology partner is for.
The benefits of AI agents — automation, efficiency, proactive insights — are real. But they need to be delivered within a framework you can trust, so you can focus on running your business rather than worrying about what your AI is doing behind the scenes.
The Numbers Worth Knowing
Let's put some context around this. As of early 2026:
Research firm Gartner predicts that over 40% of AI agent projects will be cancelled by 2027 — not because the technology doesn't work, but because organisations rushed in without adequate planning, security, or governance.
Despite nearly 80% of companies claiming some form of AI adoption, only 11-14% actually have solutions ready for real deployment. There's a significant gap between experimenting and operating.
A comprehensive study found that 45% of code generated by AI contains known security vulnerabilities. The tools are getting smarter, but not necessarily safer.
And perhaps most tellingly, only 6% of companies report that AI is meaningfully contributing to their bottom line — not because AI lacks potential, but because most implementations lack the strategic guidance and security foundation to deliver results.
The technology is moving faster than most businesses' ability to deploy it well. That's not a reason to avoid it. It's a reason to approach it thoughtfully.
What Practical, Smart Adoption Looks Like
The businesses getting real value from AI right now aren't the ones installing every new tool that appears. They're the ones taking a measured, deliberate approach:
They start with managed platforms. Rather than experimenting with unvetted AI tools on systems that hold client data, they work within governed ecosystems where security and compliance come built in — not bolted on as an afterthought.
They keep humans in the loop. The most successful deployments use AI to support human decision-making, not bypass it. The AI drafts the report; a person reviews it. The AI flags an issue; a person decides what to do about it.
They work with a partner who understands both the technology and the risk. AI implementation isn't just an IT project. It touches security, operations, compliance, and how your people work day to day. That's a lot to navigate alone — especially when you'd rather be focusing on clients and growth.
They measure what actually matters. Time saved. Decisions improved. Revenue influenced. Not just how many AI tools they've switched on.
Microsoft reports that properly implemented Copilot delivers 9.4% higher revenue per seller. McKinsey documents 60% productivity gains in specific workflows. The key word in both cases is "properly" — with the right configuration, security, and ongoing management in place.
The Opportunity Is Real — and So Is the Need for the Right Partner
Here's the honest picture: AI agents represent a genuine shift in how businesses can operate. The ability to automate routine work, surface insights from your data, and free up your time for the things that actually grow your business isn't theoretical. It's happening today.
But the OpenClaw story shows us what happens when powerful technology arrives without the security, governance, and expert guidance to match. And if there's one consistent theme from every piece of research, it's this: the businesses that benefit most from AI are the ones that don't try to figure it all out alone.
Having a technology partner who understands the full picture — the tools, the threats, the compliance requirements, and the practical realities of running a business in Africa — isn't a nice-to-have. It's how you turn a powerful but complex technology into genuine peace of mind.
Let's Have the Conversation
At First Consulting Alliance (FCA), we help businesses make sense of exactly these kinds of shifts — separating what matters from what's noise, implementing what works, and making sure what's running in your environment is running safely.
Whether you're exploring what Microsoft Copilot could do for your team, rethinking your security posture in a world where AI agents are becoming standard, or simply trying to understand what all of this means for your business — we'd welcome the conversation.
No jargon. No pressure. Just a straightforward discussion about where you are and where you want to be.
Big enough to deliver enterprise-grade AI solutions. Small enough to care about getting it right for your business.
📞 +27 11 663 0000 ✉️ helpdesk@firstconsulting.co.za 🌐 www.fcaafrica.com