The Chamber 🏰 of Tech Secrets is open.
I’m traveling this week for work, but I want to share some thoughts I had over the weekend and see how they resonate. Once I get back home, it will be time to dig into the Edge Build series, so stay tuned for that.
Our Agentic Future
I saw two things during my Saturday morning reading that led to a bit of an epiphany for me (and I am probably way behind) as it relates to AI Agents.
Let’s start by diving into the two ideas individually, and then make the connection.
Idea 1: Triangle of Talent
I came across the “triangle of talent” from Shaan Puri. It looks like this…
The post describes the minimal mix of talent levels an organization needs at different sizes. When you are an early startup, you need 100% of your people at L5. As you grow, the ratio changes, in Shaan's opinion (and mine: you may need people who just complete tasks if you tell them WHAT/HOW/WHEN).
While my own version of this might be less blunt (“L1 - Useless” and “L2 - Task Monkey”) 🤣, I think the framework is strong and forces thinking about the strengths and capabilities of your own team. Do you have the right mix? Who has potential to move to Level 4 or 5 with more active coaching and mentoring?
Okay let’s continue to the second idea…
Idea 2: Three Observations
Shortly after, I read the blog post, Three Observations by Sam Altman. I’d recommend reading it, but in short, Sam makes 3 observations about AI.
“The intelligence of an AI model roughly equals the log of the resources used to train and run it (training compute, data, inference compute)… It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.”
“The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use.” I’m calling this “Altman’s Law”.
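Taken literally, “Altman’s Law” implies exponential decay in the cost of a fixed level of capability. A quick sketch with illustrative numbers (the function name and starting cost are mine, not from the post):

```python
def cost_after(months, cost_0=1.00):
    # cost(t) = cost_0 * 10^(-months / 12): a 10x price drop every 12 months
    return cost_0 * 10 ** (-months / 12)

# After one year the same capability costs ~10% of the original price;
# after two years, ~1%.
```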
“The socioeconomic value of linearly increasing intelligence is super-exponential in nature.”
In summary: AI is going to get smarter with more investment, it is going to get cheaper, more people will use it, and it is going to make things better. So the investment is going to continue.
Then he turned to the idea of agents [emphasis mine]…
“Let’s imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others.
Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.”
Altman’s vision of a world filled with AI agents—thousands or millions of them working as junior assistants—raises an interesting question… How do we ensure these agents are actually useful? Done poorly, we risk creating thousands of inefficient, poorly managed agents rather than a true productivity revolution, and the potential for a management nightmare.
This leads me to my weekend idea…
Interlude: What is an agent anyway?
Since we are going to talk about agents, we should be clear about what they are.
An AI agent is a system that autonomously takes actions toward achieving a goal within a defined environment. It typically…
Receives input (natural language, structured data, etc.)
Processes the input using reasoning, planning, and memory
Acts on the world by making API calls, triggering workflows, or interacting with other agents/humans
Iterates and adapts based on feedback
Agents are software. And to be an agent, software must have agency.
Idea 3: Triangle of Agents
Now that we agree what an agent is, we can connect the previous two ideas together, and reveal the “Triangle of Agents”, which I propose as a way to think about the capability of agents and their usefulness in organizations.
Yes, you can make fun of my imperfect triangle. It is harder to make a segmented triangle in Miro than I expected! If only I had an AI Agent to do it… 🤔
Let’s unpack each of the triangle levels…
Level 1: Brittle and Useless
These are agents that require extensive supervision and frequently fail to complete assigned tasks. Many enterprises will produce Level 1 agents in the short term as they experiment with AI, often driven by cost-saving initiatives or hype rather than robust strategy.
Many of these agents will be based on strong business ideas but will fail due to poor integration patterns and architectures, such as over-reliance on point-to-point integrations or brittle workflows that require direct database manipulations. These fragile solutions will ultimately add little value, frustrating both customers and employees.
Examples include a customer support chatbot that steps through a complex support journey but fails to resolve the issue because of something that could have been discovered at the beginning, or a workflow automation tool that crashes when it encounters unexpected data formats.
Level 2: Task Completers
“Tell them what, how and when and they will perform a task”.
Task Completers are highly structured and valuable for automating routine workflows, but they lack adaptability and independent problem-solving capabilities. They are good at understanding well-defined user instructions and executing them reliably while functioning within strict operational boundaries. This makes them great at repetitive, rule-based tasks that demand precision.
Examples could include an AI-powered scheduling assistant that books meetings based on predefined rules or a customer service bot that processes refunds following a strict workflow.
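That refund example can be sketched as a strict, predefined workflow. The rules and thresholds below are illustrative assumptions, not from any real system, but they show what "tell them what, how, and when" looks like in code: every branch is spelled out in advance, and nothing is left to the agent's judgment.

```python
# Level 2 "Task Completer" sketch: a refund bot following a strict workflow.
# REFUND_WINDOW_DAYS and AUTO_APPROVE_LIMIT are made-up policy values.

REFUND_WINDOW_DAYS = 30
AUTO_APPROVE_LIMIT = 100.00  # refunds above this escalate to a human

def process_refund(order):
    # Step 1: hard rule -- outside the window, always deny
    if order["days_since_purchase"] > REFUND_WINDOW_DAYS:
        return "denied: outside refund window"
    # Step 2: hard rule -- large amounts always escalate
    if order["amount"] > AUTO_APPROVE_LIMIT:
        return "escalated: requires human approval"
    # Step 3: otherwise approve; no judgment calls, no adaptation
    return "approved"
```

A Level 3 agent, by contrast, would be allowed to decide *how* to handle the edge cases itself.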
Level 3: Problem Solvers
“Tell them what to do, and they will figure out how.”
Problem-solving agents bring a major leap in AI capability. They not only execute tasks but also determine the best approach to completion. They dynamically adjust their methods based on context, even if the specific steps weren’t predefined. These agents are particularly effective in augmenting human workflows and increasing productivity.
In the near term, these agents will likely operate with human-in-the-loop oversight to build trust, but they may eventually work autonomously in many cases.
These agents are defined by their ability to adapt to new or slightly ambiguous situations without explicit step-by-step guidance. They utilize reasoning capabilities/models to optimize task execution, and then perform multiple steps independently within a given goal framework.
Examples could include a coding AI that not only suggests fixes but also runs tests and deploys patches, or an AI-powered research assistant that autonomously gathers, summarizes, and analyzes relevant information for decision-making.
Level 4: Systems Thinkers
“Tell them the problem, and they will figure it out using people and processes.”
Systems Thinking agents move beyond individual task execution and begin to coordinate across multiple teams, processes, and systems. These agents don’t just complete tasks; they identify dependencies, optimize workflows, and orchestrate both human and AI resources to solve complex business problems.
For these agents to function effectively, enterprises need to invest in some foundational components:
API & Event Registry: What APIs and enterprise messages are available? What do they do? How do you authenticate to them? How do they translate into business terms?
Agent Identity Management: Defining least-privileged authentication, authorization, and resource access for AI agents
Agent Registry: Repository of existing agents and their respective capabilities
Organizational Structure: Who is responsible for human approvals? Who are the people to escalate to in the event of an issue?
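To make the Agent Registry idea concrete, here is a minimal sketch of what such a catalog might hold: each agent's capabilities, its least-privileged scopes, and an escalation contact. The field names and scope strings are illustrative assumptions, not a standard schema.

```python
# Sketch of a minimal Agent Registry: a catalog of agents, what they
# can do, what they may touch, and who to call when they get stuck.

from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    capabilities: list       # business-level things the agent can do
    allowed_scopes: list     # least-privileged resource access
    escalation_contact: str  # who handles approvals and incidents

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record):
        self._agents[record.name] = record

    def find_by_capability(self, capability):
        # Lets a Systems Thinker discover which agents can take on a task
        return [a for a in self._agents.values()
                if capability in a.capabilities]
```

A Level 4 agent orchestrating work would query a registry like this instead of having its delegates hard-coded, which is exactly what makes the coordination adaptable.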
Systems Thinkers are defined by their ability to coordinate multiple AI agents and human workers to achieve a higher-level business goal. They can learn an organization's available resources and apply them to decisions that impact multiple systems, business areas, or departments.
One example would be a business operations AI that optimizes workforce distribution, supply chain logistics, and inventory management in real time.
Level 5: Artificial General Intelligence Agents
“They identify the right problem and get it solved.”
AGI agents are autonomous decision-makers capable of identifying opportunities, predicting challenges, and proactively solving problems. Unlike Level 4 agents that require a clearly defined problem statement, AGI agents determine what needs solving and take action accordingly.
This level requires the highest degree of autonomy, reasoning, and context-awareness. Full realization of this level is likely years away (if ever).
It is defined by the ability to learn and maintain a near-complete context of an organization and then apply reasoning to prioritize impactful problems, research solutions, provide recommendations, and farm out action.
People and Agents
For most organizations, the lowest hanging fruit is implementing Level 2 and 3 agents, with some possible opportunities for Level 4 emerging. This will be a massive productivity boon for adopters.
We must remember that all of these agents will do what they are asked within their capability (which will increase) and permissions set (which is probably poorly managed today).
This means that agents give us the potential to complete massive numbers of tasks very quickly, which is great. It also means that agents give us the potential to create massive problems — at scale — very quickly if we ask for the wrong things, which is not great. You will get what you ask for, so the context (aka data) you possess in your head and your ability to translate need into agentic action will be extremely important.
The skill of asking for the right things in the right way will become increasingly important. So… the era of Agentic AI may mean we need more Level 4 and Level 5 people on our teams, farming tasks to Level 2 and Level 3 agents. A Level 5 human paired with Level 3 and Level 4 agents will have massive productivity potential.
What is Brian doing today?
Personal: Use reasoning models to research things quickly — I run a lot of ideas through Google Gemini thinking models for personal purposes. I allow thinking models to have agency over creation of a report (say on market research for investment purposes) or research output of an idea (best approaches for bootstrapping an edge node) and then I digest it to make decisions.
Personal: I use multiple language / thinking models in parallel to refine and enrich blog posts. I don’t write these things by myself anymore… I have a team of AI friends from OpenAI and Google Gemini helping me flesh out my ideas and enrich them with context and examples, and they work for free! The agency is over a draft which is followed by my feedback and refinement… but it would be fun to set up a game of pinball between the two and see what they create (and how the loop ends).
Work: Learn as much as I can about agents, agent-frameworks, and agent-tangential opportunities (especially in knowledge/system/api discovery and agent idm). I have yet to endow agency to anything work-related, but hopefully opportunities come soon!
Conclusion
Now we have a decent model for thinking about the usefulness and productivity of AI agents, and some ideas on how they might be able to help us at each level.
Remember: People are still going to be critical!
Asking for the right things in the right way will matter more than ever. Leadership will matter more than ever. Having a large context will matter more than ever. Coaching people to develop themselves into context-equipped Superstars (Level 5) will matter more than ever.
The organizations that win will be the organizations that harness the power of AI at Level 4 (and maybe 5), but to do so, they will need Level 5 leaders.