The evolution of the expert: Why technology leaders must build their own "second brain"

Peter Chomowicz, program director of the master's in technology leadership program, explains that today’s leaders must build a “second brain” by partnering with agentic AI to navigate rapid change. Reflecting this shift, the MTL program now includes a new course, “Agentic AI for Technology Leaders,” which equips professionals to build and deploy their own AI agents.


In my years leading the master's in technology leadership (MTL) program at Brown, I’ve seen countless technological "revolutions" come and go. But what we are witnessing right now with AI isn't just another cycle of automation; it’s a fundamental shift in the very definition of a "leader."

Our students, including senior directors, VPs and technical leads, are often 15 to 25 years into their careers. They have built a "moat" of specialized knowledge, yet many now feel a new kind of "expert anxiety." They find themselves making high-stakes decisions in a landscape that changes weekly, caught between the boardroom’s demand for AI results and the technical complexity of the engineering floor.

To address this specific shift, we have expanded the MTL curriculum to include a new, dedicated course: “Agentic AI for Technology Leaders.” We didn't add this course to simply discuss AI strategy in the abstract; we added it because we believe the modern leader needs to move from managing human workflows to orchestrating human-agent ecosystems.

"The future isn't just about 'using' AI," said Vikash Rungta, the faculty lead for this new course and an expert in agentic workflows. "It’s a world where you aren't just hiring humans anymore. You are hiring and building agents to work alongside you as partners."

Moving beyond the chatbot

Most organizations are currently stuck at what we call the "assistant" level. They use AI to draft an email or summarize a meeting. This is helpful, but it’s incremental. It doesn’t change the architecture of how a leader operates.

Agentic AI represents the next frontier. Unlike standard LLMs that wait for a prompt, an AI agent is:

  • Proactive: It initiates actions based on goals.
  • Persistent: It maintains memory across sessions.
  • Tool-Equipped: It can execute multi-step workflows and use external software.
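The three properties above can be sketched in a few lines of code. This is a minimal illustration with hypothetical stubs (the `check_calendar` tool and the hard-coded planner stand in for real APIs and an LLM), not any particular agent framework:

```python
# Minimal sketch of an agent that is proactive (pursues a goal on its
# own), persistent (keeps memory across steps) and tool-equipped
# (calls external functions). All names here are illustrative stubs.

def check_calendar(day: str) -> str:
    """Stub tool: in practice this would call a real calendar API."""
    return f"{day}: 2 meetings, 3 free hours"

TOOLS = {"check_calendar": check_calendar}

class Agent:
    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[str] = []  # persists across steps/sessions

    def plan(self) -> dict:
        """Stub planner: a real agent would ask an LLM for the next action."""
        if not self.memory:
            return {"tool": "check_calendar", "args": {"day": "Monday"}}
        return {"done": True}

    def run(self) -> list[str]:
        # Proactive loop: keep acting until the goal is satisfied,
        # without waiting for a human prompt at each step.
        while True:
            action = self.plan()
            if action.get("done"):
                return self.memory
            result = TOOLS[action["tool"]](**action["args"])
            self.memory.append(result)  # remember what happened

agent = Agent(goal="prepare my week")
print(agent.run())
```

In a production system, the planner would be an LLM call and the tool registry would hold real integrations, but the loop structure (plan, act, remember, repeat) is the core of the pattern.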

Rungta describes this as a "second brain": a structured digital partner that understands your decision-making style, project facets and professional network.

The vision: Imagine an agent that doesn't just summarize a report but recalls last quarter's decisions, cross-references them against your Q3 goals, flags the risks you forgot you flagged and drafts a plan to address them — all before your morning coffee.

From theory to deployment: Building your agent

The most critical differentiator of this new course is its tangible outcome: you do not just learn about agents; you leave the course having built one. We know that for a VP or director, the "aha moment" doesn't happen in a lecture; it happens in the field.

To identify the best automation opportunities, students use the TOIL Framework. Originally a concept from software engineering, "TOIL" refers to work that is manual, repetitive and provides no long-term value. In this course, students perform a radical audit of their workday to flag tasks that are:

  • Task Repetitive
    • Is it basically the same every time? Are the inputs and outputs predictable?
  • Opportunity to Automate
    • Can GenAI handle this with minimal human babysitting?
  • Impact on Business
    • If we automate it, do we move something that matters — revenue, compliance, CSAT, output?
  • Latency Reduction
    • Is this task holding up other people or downstream work?
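One way to picture the audit above is as a simple scoring exercise. The sketch below is illustrative only (the 0–5 scales and example tasks are assumptions, not the course's actual tooling): each task gets a score on the four TOIL questions, and the highest totals surface the best automation candidates.

```python
# Hypothetical TOIL audit: rate each task 0-5 on the four questions
# and rank by total score to find the strongest automation targets.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    repetitive: int   # T: basically the same every time?
    automatable: int  # O: can GenAI handle it with minimal oversight?
    impact: int       # I: does automating it move a business metric?
    latency: int      # L: is it holding up other people?

    def toil_score(self) -> int:
        return self.repetitive + self.automatable + self.impact + self.latency

# Example audit of a leader's workweek (invented numbers):
tasks = [
    Task("Weekly status report", 5, 4, 2, 3),
    Task("Quarterly strategy memo", 1, 2, 5, 1),
    Task("Triage support tickets", 4, 4, 4, 5),
]

for t in sorted(tasks, key=Task.toil_score, reverse=True):
    print(f"{t.name}: {t.toil_score()}")
```

Under this scoring, ticket triage outranks the strategy memo: the memo matters, but it is neither repetitive nor easily automated, which is exactly the distinction the audit is meant to surface.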

Students spend the course building a custom proof-of-concept AI agent that they can immediately deploy within their organizations.

"You learn them by breaking them"

There is a specific philosophy we hold at Brown: You cannot lead what you do not understand. In the development of this course, Rungta noted a key truth that stuck with me: "You learn these systems by breaking them."

By building their own agents, leaders learn to:

  • Identify where AI hallucinates.
  • Diagnose where logic loops fail.
  • Red-team their own deployments for security and ethics.

This "boots on the ground" experience is what gives them the authority to return to their teams and say, "I’ve built this, I’ve broken it and here is exactly how we are going to implement it responsibly."

The competitive moat of the future

The "competitive moat" of the past was built on technical expertise. The competitive moat of the future is built on agentic agility. At Brown, we believe the leaders who will win aren't the ones who deploy AI the fastest; they are the ones who move most thoughtfully from a "human-only" mindset to a human-agent partnership.

By building and implementing your own “second brain,” you aren't just keeping up with the technology; you’re redesigning the very nature of your leadership for the era to come.
