Everyone's asking about agents. What are they? How do they work? Will they actually help, or just create new problems to manage?
An agent is a system that can take a goal and work toward it, step by step, without waiting for your input at every turn. How well it works depends less on the technology and more on the business problems you’re trying to solve.
However, not every workflow benefits from an agent.
According to MIT’s Project NANDA study, many enterprise AI pilots deliver no measurable return despite companies pouring $30-40 billion into generative AI investments. Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027, citing “escalating costs, unclear business value or inadequate risk controls.”
The problem isn’t necessarily the technology. It’s the workflows these systems are being applied to.
AI agents only create value when the underlying workflow actually supports them. Some processes are naturally suited for agents, while others create complexity without delivering real gains. The workflows we've delivered that produce real results, whether it's Kids Help Phone's AI system handling sensitive conversations with at-risk individuals or a sales-funnel information agent for Telus Digital, share a common set of structural conditions that make agents viable.
Before building anything, run your workflow against the list below.

The Right Conditions
It's almost automatable
There's a pattern, but it still needs some judgment. A human is bridging gaps between systems, interpreting edge cases that deviate from the norm but follow recognizable patterns, or making low-stakes calls that follow a general logic. Customer service inquiries, invoice processing, data entry, and report generation come up frequently.
The goal is defined
"Make things better" isn't a goal. "Reduce claim processing time by 20%" is. If you can't articulate what success looks like, you can't measure whether an agent got there.
There's tolerance for iteration
Agents aren't plug and play. They need tuning, feedback, and course correction. If your organization expects an agent to work perfectly on day one, you're not ready. For example, with the marketing agent for Sunoco, we fully expect to refine and adjust the campaign briefs it generates over time. That iteration is part of the process, not a failure of it.
The data is structured and accessible
If your data lives in scattered PDFs and your systems don't talk to each other, the agent has nothing to work with. Structured data, accessible APIs, and clear integrations are prerequisites. Even when agents interact through natural language, they rely on structured context behind the scenes. For Waytrade’s operational agent, this means freight data, COAs, market prices, trades, and bids must be standardized, connected, and queryable through APIs or unified platforms. Without this structure and integration, the agent cannot accurately retrieve, compare, or reason across datasets, which in turn makes reliable answers impossible.
There's enough volume
The work shows up often enough to justify the investment. At very low volumes, the overhead of building, maintaining, and improving an agent outweighs the benefit. As volume increases, the efficiency gains compound, making the case stronger. It’s not just about scale. It’s about whether the time saved and consistency gained make the effort worthwhile.
The patterns recur
Agents thrive on repetition. One-off, highly contextual tasks that require fresh judgment every time? Not a fit. Predictable patterns with clear inputs and outputs? That's where they shine. Telus Digital’s sales-funnel agents are well suited to tasks like responding to inbound lead queries, pulling standardized data such as pricing tiers, product specs, case studies, and availability to deliver consistent, accurate answers. Each interaction follows a repeatable pattern: identify the lead’s stage and intent, retrieve the relevant information, and respond, making it an ideal use case for automation.
The rules can be written down
If the logic is documentable, an agent can follow it. If it's all gut calls with no discernible pattern, automation becomes fragile. Agents can handle nuance, but there needs to be logic to that nuance. For Kids Help Phone, where conversations are with at-risk individuals, the agent’s job isn’t to resolve the situation; it’s to hold the space until a human can. Responses have to match the level of risk: emergency services for acute situations, peer resources and support networks for someone who is struggling but not in immediate danger.
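To make "documentable logic" concrete, risk-tiered routing of this kind can be written down as an explicit policy table. This is a minimal sketch with made-up tier names and policy labels, not Kids Help Phone's actual implementation:

```python
from enum import Enum

class RiskLevel(Enum):
    ACUTE = "acute"            # immediate danger
    STRUGGLING = "struggling"  # distressed, but not in immediate danger
    ROUTINE = "routine"        # general inquiry

# Hypothetical routing table: each documented rule maps a risk tier
# to a response policy the agent can follow deterministically.
RESPONSE_POLICY = {
    RiskLevel.ACUTE: "escalate_to_human_and_emergency_services",
    RiskLevel.STRUGGLING: "share_peer_resources_and_queue_for_human",
    RiskLevel.ROUTINE: "answer_with_approved_content",
}

def route(risk: RiskLevel) -> str:
    """Return the response policy for a classified risk level.
    Classification itself would come from an upstream model or ruleset."""
    return RESPONSE_POLICY[risk]
```

The point is that the nuance lives in the classification step, while the response logic stays auditable: anyone can read the table and see exactly what the agent will do at each tier.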
Where agents shine
The scenarios practically built for agents are repetitive, data-heavy, and pattern-based. Scenarios like:
High-volume data analysis. Pattern recognition across thousands of records. Anomaly detection. Classification at scale. Work that would bury a team, but agents handle well.
Prediction and forecasting. Demand planning, production scheduling, resource allocation, and inventory management. Historical patterns informing future decisions.
Continuous monitoring. Watching data streams, acting on thresholds. Fraud detection. Compliance monitoring. Quality assurance. Background processes that catch things before they escalate.
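A threshold-based monitor of this kind reduces to a small loop. The sketch below is generic: the readings, threshold, and alert callback are placeholders, not any specific fraud or compliance system:

```python
def monitor(readings, threshold, on_breach):
    """Scan a stream of numeric readings and invoke a callback
    whenever a value crosses the threshold. Illustrative only:
    a production monitor would consume a live queue or metrics API."""
    breaches = []
    for value in readings:
        if value > threshold:
            on_breach(value)   # e.g. page a human, open a ticket
            breaches.append(value)
    return breaches

# Example: flag transactions above a $10,000 review threshold.
flagged = monitor([250, 12_000, 980, 15_500], 10_000, print)
```

The agent's value here isn't cleverness; it's tireless attention, checking every reading against the rule so nothing slips past before a human can act.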
Industries already leaning in: healthcare, finance, marketing. They're data-dependent, and their workflows have the structure and volume that make them fertile ground for agents.

Agents are tools, not replacements
Humans in the loop aren't a fallback. They're part of the design.
Here’s where most teams get it wrong. They treat agents like a replacement for headcount. Cut the team, let the agent handle it.
This approach creates a trust problem.
Organizations hesitate to deploy agents because they assume the alternative to human work is full autonomy. And when agents appear to be operating without clear oversight, trust collapses.
Agents aren’t meant to replace judgement; they're most efficient when they handle volume and routine while humans manage exceptions and high-consequence judgement calls. The goal isn’t to eliminate human judgement. It’s to reserve it for where it matters most.
Only 6% of companies fully trust agents to run core business processes autonomously. That's not a technology gap; it’s a trust gap.
The organizations making progress with this are the ones that define clear boundaries: what the agent can do independently, and when a human needs to step in.
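One way to make such a boundary explicit is a decision gate that every proposed action passes through before execution. The stakes flag and confidence score here are hypothetical fields, sketched only to show the shape of the policy:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    high_stakes: bool   # would an error have serious consequences?
    confidence: float   # agent's self-assessed confidence, 0.0 to 1.0

def decide(action: ProposedAction, floor: float = 0.8) -> str:
    """Illustrative boundary policy: the agent acts on its own only
    for low-stakes actions it is confident about; everything else is
    routed to a human reviewer."""
    if not action.high_stakes and action.confidence >= floor:
        return "agent_executes"
    return "human_review"
```

Because the boundary is code rather than convention, it can be reviewed, tightened, or loosened as trust builds, which is exactly the kind of oversight that keeps trust from collapsing.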
The Bottom Line
Agents aren't magic. They're tools: powerful ones, but still tools.
They work best when:
The process has clear rules and recurring patterns
The data is structured, sufficient, and accessible
The goals are defined and measurable
The stakes allow for iteration and error
There's a human in the loop for high-consequence decisions
If most of that is true for you, you're ready. If it's not, you're not ready yet. There's no shame in that.
The organizations that succeed aren't the ones that rush in first. They're the ones that get the foundations right.
Start small. Prove value. Build trust. Expand based on evidence.
References
https://www.mckinsey.com/capabilities/operations/our-insights/when-can-ai-make-good-decisions-the-rise-of-ai-corporate-citizens
https://hbr.org/2025/11/ai-agents-arent-ready-for-consumer-facing-work-but-they-can-excel-at-internal-processes
https://hbr.org/2025/08/beware-the-ai-experimentation-trap
https://fortune.com/2025/12/09/harvard-business-review-survey-only-6-percent-companies-trust-ai-agents/


