Agentic Mesh Ecosystem Patterns with Eric Broda

Jan 24, 2025 | AgileData Podcast, Podcast

Join Shane Gibson as he chats with Eric Broda on the patterns required to create an ecosystem to support the use of Agents in enterprise organisations.

Listen on your favourite Podcast Platform

| Apple Podcast | Spotify | YouTube | Amazon Audible | TuneIn | iHeartRadio | PlayerFM | Listen Notes | Podchaser | Deezer | Podcast Addict |

Podcast Transcript

Read along you will

Shane: Welcome to the Agile Data Podcast. I’m Shane Gibson.

Eric: Hey Shane, my name’s Eric Broda. Thanks for having me on the show.

Shane: Thanks for coming on the show again. I think second time. Today we’re going to rip into this thing called the agentic mesh. But before we do that, why don’t you give a bit of a background about yourself to the audience?

Eric: Yeah, sure. I’ll try and be brief. I’m out of Toronto, Canada, almost 40 years in the industry, almost all of it in tech, as an executive at big banks and global insurance companies. About seven years ago I started my own boutique consulting company, and we build ecosystems for companies. We started off back in the API days.

We built service meshes. In the data mesh days, we built data products and put them into a data mesh. And now we’re building on some of our Gen AI expertise: we’re building enterprise grade autonomous agents and putting them into an ecosystem, and we’re calling that the Agentic Mesh.

So that’s been taking a lot of our time and our focus. I’ve written a book with O’Reilly called Implementing Data Mesh, which was published in October. They’ve given me the okay to write my next book with them, and that’s on agentic mesh, the whole topic we’re actually going to talk about today.

And I’ve also written a number of articles on Medium. I think one that hopefully many people listening to the podcast may have already read, entitled Agentic Mesh, seemed to really resonate. So anyway, that’s a little bit about me.

Shane: Excellent. So Medium still, you haven’t moved to the substack ecosystem yet?

Eric: I’m thinking about it, but between trying to build this agentic mesh type capability, writing on Medium, and then having yet another Substack or otherwise, it’s just one bridge too far for me right now, my friend.

Shane: It’s a bit like data platforms. I mean, you can replatform your data platform to keep yourself busy, but actually add no value to the organization.

So, focusing and delivering on the real value is a good way to go. So, let’s start out with what’s your definition of Gen AI. So, when you say the word Gen AI, what do you talk about? What do you mean?

Eric: Sure. Gen AI, generative AI, is about large language models. There’s a lot of definitions for it, but for me it’s a superpower that gives us the ability to use technology in a cool way.

It allows us to actually interact in natural language, and if you have a good enough model, it may even be able to reason. If I keep it really simple, it’s a superpower that lets computers be a heck of a lot smarter than they have been in the past. And by being smarter, you can do all sorts of cool things.

People have started off using it as a content creation tool. But I think as it progresses, we’re going to start to see LLMs become a cornerstone in a broader ecosystem, and we call those agents. You’ll never have just one, you’ll have a bunch of them, and they all interact in an ecosystem.

So that’s a little bit of what I call generative AI, but probably I’d dovetail a little bit in terms of its evolution, perhaps.

Shane: Yep. That’s good. It’s always good to get an anchoring statement that gives me some context when you use those terms. And then another anchoring statement then. So when you talk about an agent, how would you describe that to somebody?

Eric: Sure. An agent uses an LLM, a large language model. It can plan its activities, it can then execute those activities or tasks, and it can use tools to actually do that. Along the way, a more sophisticated capability is that it can actually learn from its past interactions and start to create some brand new capabilities.

Although, in terms of some of the current tool suites, that may be a little bit tricky today. But that’s what I think of as an agent: not just what it can do today, but perhaps where it’s going in the future.

Shane: Okay, so let’s play that one back to you. So basically, agent, plan the work to be done, execute the work to be done, and even now or in the future, depending on tool set and capability, learn from the work it did and change the way it plans it to be more efficient, more effective.

That’s effectively how you see an agent.

Eric: Yeah, absolutely. There’s probably one thing I did forget. When I think of an agent, especially compared to an LLM, if you go to ChatGPT, for example, you have a user interface, and until recently it answered your questions based on what it knew. You’re starting to see a little bit more where it’s able to actually access the internet.

So broadly speaking, those are tools. An agent is also able to interact with its environment through tools. That’s probably one of the things that I did not add in the first answer, but it definitely warrants mentioning.
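
To make that definition concrete, here is a minimal, purely illustrative Python sketch of the plan-execute-with-tools loop Eric describes. The `llm()` helper and the two tools are invented stand-ins, not part of any real toolkit he mentions.

```python
# A minimal sketch of the agent loop: plan the work, execute each task with a
# tool, and leave a hook for learning from past interactions. Everything here
# is illustrative; `llm()` stands in for whatever model you actually call.
from typing import Callable, Dict, List


def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned plan so the sketch runs."""
    return "lookup_customer\nsend_email"


# Tools are plain functions the agent is allowed to call.
TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_customer": lambda goal: f"customer record relevant to: {goal}",
    "send_email": lambda goal: f"confirmation email sent for: {goal}",
}


def plan(goal: str) -> List[str]:
    """Ask the model for an ordered list of tool names to run."""
    raw = llm(f"Goal: {goal}. Available tools: {list(TOOLS)}. "
              "Return one tool name per line.")
    return [line.strip() for line in raw.splitlines() if line.strip() in TOOLS]


def run_agent(goal: str) -> List[str]:
    results = []
    for tool_name in plan(goal):                 # plan the work
        results.append(TOOLS[tool_name](goal))   # execute each task with a tool
    # A more sophisticated agent would feed `results` into its next plan,
    # which is the "learn from past interactions" capability Eric mentions.
    return results


print(run_agent("open a bank account for Shane"))
```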

Shane: So, Gen AI is an LLM. Agents have the ability to plan, execute, and potentially learn to do a task.

So that leads us on to this term, agentic mesh. Give me the explanation for that puppy.

Eric: Sure. This could be a long explanation, perhaps, but in fairness to the listeners here I’ll make it really simple. Agentic mesh is an ecosystem that lets autonomous agents find each other and safely collaborate, interact, and transact. It’s that simple.

There’s a lot of details and implications of that, but that’s the simplest definition. As you start to delve into that, you probably need to think about, if I want to actually use this in a corporation, in a company, I probably have to think about things like making each agent enterprise grade.

And that has a whole series of implications. If I think about agentic mesh as an ecosystem, uh, I probably need to think not just about the consumer experience, but also the producer experience. How does somebody actually create an autonomous agent and actually have it published so folks can actually find it?

And last but not least, when I think about agentic mesh, it also fosters the agent experience: how do agents actually find each other and interact? Then there’s another experience plane around how an operator can manage the ecosystem, but those are the three fundamental capabilities that the agentic mesh, the ecosystem, actually provides to each and every agent.

Shane: All right, let’s unpack that. But before I do that, can you just tell me what the planes were again? There’s an experience plane.

Eric: Sure. There’s two experience planes, one for consumers and one for producers. A consumer is somebody who wants to engage an agent. The producer plane is for someone, a group, who wants to actually build agents.

There’s the agent plane. So now that consumers can find and engage agents and producers can create them, how do the agents in fact find each other, interact, and collaborate? And the fourth one, which I suppose is mostly implied but is just as important, is the operator plane. Now that I have all of this, how do I ensure it’s stable, reliable, and resilient?

Shane: Lots of good detail in there. So let’s start unpacking it. Let’s start off with the term mesh, because we’ve seen a whole lack of clarity in that term in the data space when data mesh came out. We saw the data fabric bullshit from Gartner, where they tried to compete with it. And so when we talk about mesh, we can talk about a technology view or a socio-technical thing.

Organization, team design, structures versus technical ecosystems. So when you’re talking about Mesh, are you talking about a form of technology, or are you talking about team design, organization design, and a sense of decentralization?

Eric: I think those two are implied. Um, but I’m going to, I’m going to say the mesh is the ecosystem.

So let me give you a very practical example. There’s probably ecosystems that you use and our listeners here use. I don’t know about every day, but quite frequently. So the most common one that I talk about is Airbnb. Airbnb allows consumers to find properties to rent. It allows producers to actually make their properties available for folks to rent.

Airbnb is the platform that lets consumers and producers actually find each other, interact, and transact. That’s what the ecosystem is. So for me, a mesh is just an ecosystem. And if I have a data mesh, it lets folks who want to consume data and folks who want to produce data find each other, interact, collaborate, and transact.

So when I think about a mesh, I think listeners should actually think Airbnb or Uber, where Uber is an ecosystem that lets folks who want rides find and engage with folks that have cars and want to provide rides, and the platform, the mesh, brings those folks together, allows them to find each other, interact, and transact.

So yes, there’s been a lot of confusion, no thanks to folks like Gartner and others that have really made the concept difficult to understand. It shouldn’t be. It is actually really simple. We use mesh capabilities, we call them ecosystems, we call them Airbnb, we call them Uber, every single day. It’s just a technology and a capability that allows producers and consumers to find each other, interact, and transact.

Now, you brought in things around the teams and the technology. Obviously, a platform that provides that capability has to have some operating model, a business model, and has to have teams that actually provide that capability, some governance capability.

It also has a technology. There’s a lot of different ways to implement the technology. I guarantee you that Uber’s platform is different than Airbnb’s, even though they provide a somewhat similar capability, although obviously different in the details and how they execute, but it really shouldn’t be any more complicated than that.

Shane: So an ecosystem is a form of a network. It’s a form of a boundary where people create stuff and people consume it. In between the tasks of creation and consumption there’s the doing, the Uber driver drives, and then there’s some form of operating framework, operating model, platform, ecosystem technology that puts all that together and helps it do it effectively.

Okay. So that’s mesh. And then agentic, I’m going to guess here, right? Agentic basically says instead of a human doing the doing, we’ve got an LLM doing it, because it’s the term agent. Is that what you mean?

Eric: Yeah, I’ll be a little bit more detailed, but I think the simple answer is yes. So agentic, the definition, if you go on Google or wherever, it means you’re able to make independent decisions in pursuit of a goal.

Agentic AI uses LLMs and sophisticated capabilities to do reasoning, plan the work, and iterate through it. So it’s really that simple. Now, what do they use to actually do that? Obviously, large language models come into play, and small language models. When we get into it, I’m sure we’re going to talk about the fact that everybody’s focused on these large language models.

But we’re finding, based on the footprint, for example, on phones or on edge devices, that small language models are also going to become in vogue. So really, agentic and agentic AI just means that we’re using sophisticated tools like large language models to make independent decisions, plan activities, and actually execute.

Shane: Okay, but the key is the machine’s doing it, not the human.

Eric: The key is the machine is doing it, but there is a human in the loop, which we’ll get into, I’m sure, in a little bit more detail. Let me give you a really simple example. Let’s say I am, I don’t know what a large bank is in your neck of the woods, ANZ.

Am I, is that a big bank?

Shane: It’s a big bank for our side of the world. It’s probably a tiny bank for your side of the world.

Eric: So for your listeners in that neck of the woods it’s ANZ, or in my world, Royal Bank in Canada perhaps, or Wells Fargo. All these banks want to do a really simple thing: they want to get you, Shane, as a customer.

How do they do it? The very first step, once they’ve found you, is they want to open up a bank account. Intuitively that’s a very simple endeavor, but it’s not all that simple. For example, in any typical bank account opening, and I’ve worked in banks so I know this, although I will trivialize it for this discussion, one of the very first things is to confirm an identity.

We have to verify that Shane is actually Shane before we can actually open a bank account. Once we know that Shane is Shane, we want to understand what you actually want to do here, we call it KYC, or know your customer. And once we know what your risk profile is and what you want to do, we may actually open a bank account, maybe a checking account or a demand deposit or whatever the case may be.

And then we actually make an initial deposit. Then we tell you it’s all done. And lo and behold, you can now start to use that. In a typical bank today, without any of the agentic stuff, there is an army of people that actually do that behind the scenes. Having gone through this myself a little while ago, I can tell you that we went through, I’m going to say, about 20 to 30 electronic forms that I had to sign.

Along the way, I was poked and prodded very nicely, mind you. These are fantastic folks that work with me at the bank, but they poked and prodded me via text messages, via emails. Eric, did you get this? Did you get that? Oh, Eric, you missed this. Can you do that? Blah, blah, blah. There’s an army of people behind the scenes shuffling, not paper documents anymore, but electronic documents.

Now, here’s the thing: there is a human in the loop. Absolutely. It’s me. I’m the person that initiates the request. There are probably other humans in the loop that ensure that whatever the process is, it’s actually governed, it has rules and policies, and those policies are adhered to. But here’s what’s different.

In the agentic mesh world, I would suggest a year out, two years out, we’re going to start to see innovative banks, perhaps ANZ, that will start to automate pieces of that. Today we’d use RPA, workflow, and other technologies. But as you noticed, the bank that I was working with, a fantastic bank, a leader in this type of stuff, still has people behind the scenes.

And why? Because the processes are not amenable to unstructured content. They’re not amenable to the fact that there’s missing information. So what you have is people in the loop. My proposition is a lot of those people in the loop today, humans in the loop can be represented by agents. Agents that can actually ingest those electronic documents.

They can text me if stuff is missing. They can email with confirmations. All those things are very practical, but they can’t be done today because of the unstructured nature, and because it’s not a static, well defined flow. It’s a little bit haphazard. If I miss something on one form, maybe some sophisticated workflow or RPA tools may find that, but as the document passes through multiple people’s hands, that becomes typically untenable, which is why we have an army of people behind the scenes.

That army of people can be replaced by agents. That army of agents will be governed by people. Every agent will have an owner, who will be responsible for ensuring that agent actually operates to its purpose and its policies, and that owner, with the team obviously supporting them, will be responsible for fixing that agent when it strays outside its guardrails.

So there’s absolutely a human in the loop, but the internal machinations around how something gets done change. And although my crystal ball is foggy at the best of times, what I foresee is that some of those people will be replaced by agents. So, a long winded way of saying there’s always going to be a human in the loop, minimally on the initiation side, but especially on the governance side, which I think will take a very prominent role.

Shane: I think you’re right. I think RPA was a great idea, but a bad technology and a bad implementation. The reason I say that is most of the RPA processes I saw were the large consulting vendors bringing in teams and teams of junior grads and trying to configure software to automate processes. And as you said, it only tended to work when the data was structured.

It was expensive. It was inflexible. It didn’t deliver the dream. And so I can see how agents powered by LLMs potentially can give us the answer to that dream. The way I articulate it is I talk about Ask AI, Assisted AI, and Automated AI. Ask AI is a chat interface: a human asks a question, gets an answer, asks a question, gets an answer, until they’ve got to the stage where they can go and take an action without the machine.

Assisted AI is where the machine’s watching what we’re doing and it’s coming up and making suggestions, recommendations, saying, hey, I think you should do this next, but the human is still making that decision. And Automated AI is where the machine does it and we don’t see it. And so I’m with you. I think what we’ll see is these long, quite complex business processes decomposed down into small processes, effectively microservices for business processes.

Some of those will be automated, but very few. And the majority will be assistive. And in the previous podcast that I did on this, the guest talked about this idea of maturity. So people watching it in assisted mode for quite a long time. And when they’re comfortable, a small agent being flicked to automated.

And if it works, it stays automated. If it doesn’t, or something changes, it moves back. So I’m with you in terms of nodes and links, lots of small things. It’s the way we should do our transformation code and our data platforms: not one big blob of code to run them all, and not one agent to rule the world.
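
To make that flick between modes concrete, here is a small, purely illustrative sketch. The mode names and the promotion threshold are assumptions for illustration, not from any toolkit discussed here.

```python
# Ask / Assisted / Automated as an explicit per-agent setting, with automatic
# demotion back to assisted when something goes wrong. Thresholds are arbitrary.
from enum import Enum


class Mode(Enum):
    ASK = "ask"              # human asks, machine answers
    ASSISTED = "assisted"    # machine suggests, human decides
    AUTOMATED = "automated"  # machine acts, humans audit


class SmallAgent:
    """One small, single-purpose agent whose autonomy can be flicked up or down."""

    def __init__(self, name: str, mode: Mode = Mode.ASSISTED):
        self.name = name
        self.mode = mode
        self.consecutive_successes = 0

    def record_outcome(self, success: bool) -> None:
        if success:
            self.consecutive_successes += 1
            # Promote only after sustained success while humans watched.
            if self.mode is Mode.ASSISTED and self.consecutive_successes >= 10:
                self.mode = Mode.AUTOMATED
        else:
            # Anything fails, or something changes: flick back to assisted.
            self.consecutive_successes = 0
            if self.mode is Mode.AUTOMATED:
                self.mode = Mode.ASSISTED
```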

I really want to talk about this idea of the ecosystem, this idea of these experience planes and what they are, because it sounds to me like a very similar pattern to a data platform. Data’s being produced, or something’s being produced, somebody does something to it, somebody consumes it, and then we want to orchestrate and operate it as a system, as a factory effectively, with some efficiency where we can.

So, where do you want to start? Let’s go through the planes, is that one way of doing it?

Eric: Yeah, sure. Let’s start with the consumer plane. It’s probably the simplest. I suspect this will change and evolve over time and get much more sophisticated. But I’ll start with a trivial and simple example that everybody understands.

We foresee a marketplace for agents. There will be an app store for agents, the agent store, whatever you want to call it. There’ll be two personas, perhaps. The public one: maybe somebody, maybe IBM, maybe Google, maybe Microsoft, will have the intergalactic agent store at some point in the distant future.

But I think the most common situation is inside the four walls of an enterprise. So if I’m ANZ or Wells Fargo, how do I actually do this bank account open and drive some efficiencies, maybe even a better customer experience. So the first stop is somebody comes in and says, I’m going to ANZ. What do I actually want to do?

Let’s see what capabilities ANZ may offer. Now, it doesn’t actually have to be an explicit agent marketplace where you’re going to go just to ANZ to see their agents, but what you’re going to have is some kind of nice and easy to use user interface that lets people, real people, find the capabilities or the agents that the bank actually has.

So there’s the trivial, uh, instantiation that is a marketplace with tiles and stuff like that. And that can work.

Shane: Let me just think about that one. We’ve had this digitization of government dream in the world for many, many years: moving from those horrible paper forms that I have to download, fill out, scan, and send back, heaven forbid I have to go into a location for that agency, to this idea of a portal where I can go and do everything from cradle to grave.

I can go from registering a birth to registering a death. And it’s never really been delivered. I mean, it started, but if you go look in the back office of a government agency, it’s a nightmare.

It’s a bunch of cobbled together processes with a large number of people and a hell of a lot of Excel. Often it’s because their business processes don’t match enterprise software. There is only one immigration department in each country, versus many manufacturing organisations. So from a software point of view, why would you build a solution that only has 300 potential customers, if you’re lucky?

But what I think you’re saying is that agent store, that agent marketplace, is kind of that dream where I can go in and say, here’s the task I need done by this organization or third party that’s outside my sphere of influence, so just let me go in and do it. Is that what you’re talking about?

Eric: Yeah, I’ll give you an example. So I work with a number of clients, and almost every one of them gives me their own laptop. If I’m fortunate, it’s a Mac, I’m a Mac fan, by the way. But every one of them says go to the Mac App Store to download the enterprise versions of their stuff. Okay, so that’s the analogy.

By the way, they’re not all that sophisticated, okay, but they do the job. So that’s an example of what is actually out there today. And I can foresee the very first wave being a sub-optimal but trivial and easy to implement basic app store inside the enterprise. What I do think will be much more compelling, though, is that folks have really got used to, and maybe even fallen in love with, the ChatGPT style interface.

What’s not to like about talking in really simple English? What’s not to like about not getting 20 sponsored ads that are not relevant to me, but rather having something that actually answers my question? So I think what we’re going to see, as opposed to going to a marketplace, and again, I do foresee that as the first easy step, is that it will very soon be augmented with a very simple search thing where you’re going to ask a question in really simple English. Say I’m going to the ANZ website.

What do you want to do? You’re going to ask that question in really simple English. Shane’s going to write out, my name is Shane, and I want to open up a bank account and put a hundred dollars in it, or a hundred whatever, I’m not sure what the currency is over there, but I want to put some money in it. And it’s going to then figure out what that agent is and it’s going to launch that agent.

And that agent’s going to do the steps that I mentioned earlier. It’s going to verify your identity, and along the way it’s going to ask you for more information. Hey Shane, we want to verify your identity, can you give us a driver’s license or some other official document? You’re going to provide that, upload it, and then it says, oh, now we’ve got to fill out the know your customer thing.

What’s your risk profile? Do you want to do trading? Blah, blah, all that kind of stuff. And then it’ll finally say, do you want a checking account, what type of product do you actually want? It’ll deposit the money and, lo and behold, something comes back. But it’ll be a chat-like interface, and I think in the near future it’ll probably be a voice-based, multimodal interface, but that’s how I see things evolving.

So is it the agent store that people are going to interact with? Probably not, except perhaps as a first generation within an enterprise. More likely than not, it will be a multimodal, chat-like interface that initially will probably look like ChatGPT, but very soon will evolve to something that looks very different.

That’s my prediction. How far away is that? I’m going to say two years, two and a half years. And whether it gets implemented in a bank, by the way, that’s a different question. I think there’s a whole story around how to manage risks and how to introduce technology into an organization, and they’re probably going to start with simple expense reports, for example, before they do the bank account open.

So I’m not naive in that respect, but I just use that as an example because people understand it.

Shane: I just read a report that Accenture are earning more from doing, and I’m using air quotes here, AI consulting, than they earn actually building software solutions that deliver it, because they’re going in and doing million dollar prototypes that have absolutely no value.

So that’s where an enterprise will typically start from is doing that first.

Eric: I’m going to say a few things here. First off, I know a lot of really great people at Accenture, Deloitte, and all the consulting firms. I used to work there. I’m not going to cast aspersions on an organization that’s trying to make some money and does it by helping other organizations take advantage of AI, but there’s a grain of truth to what you just said, and I’m hoping we can get into this part of the discussion.

Almost all the AI work that I see today is in the class of what I call science experiments, and organizations are in the throes of the death of a thousand POCs. And here’s why, there’s one fundamental, basic reason organizations are doing this: the current crop of agent toolkits are not enterprise grade.

So what consulting companies are doing, I would suggest, is working with organizations to experiment with what’s in the realm of the possible. And to be honest with you, they should be compensated for that. For a consulting company, the wish is to go in and set expectations properly while we’re still at the science experiment or POC phase, and by the way, I’m not saying they’re doing otherwise.

I think they actually manage expectations reasonably well. But I think what folks on the outside looking in are saying is, we’re paying a lot of money to, quote, learn. And that’s what they’re doing: they’re paying a lot of money to learn what’s within the realm of the possible today. But with the advent of enterprise grade agents and enterprise grade generative AI, I think you’re going to see a world of difference.

We are absolutely positively not there yet. But we know what good looks like and we can actually get there and I’m hoping that we can actually talk about what that means.

Shane: I’m going to disagree with you there politely. Let me just hash this one out. The problem we have in the data space and in the technology space is that a lot of people, large vendors, consulting companies, individuals, jump on the next wave.

They sell the shiny, they deliver no value. They make things look complex when they should be simple. They don’t optimize for effort because they are paid for hours. And so we have a major problem in the Gen AI, LLM, and AI space where we’re about to go into that again, and we do a disservice. When data people don’t test their data and make sure the data is accurate, it’s exactly the same as when consulting companies jump on the next bandwagon, make a large amount of money out of organisations, and don’t add value to the organisation.

So, end of rant there. What I want to do is just go through those four planes so that we understand them, and then talk about what the problem is from an enterprise point of view. Let me just replay what you told me around agent stores. If I think about it, what you’re saying is in the past we had those shitty chatbots on a website where I wanted to get a task done, and it would give me effectively the menu system from the 0800 number, one, two, three, four, and it would never answer my question.

With the advent of technologies like LLMs, what it means is we now have the ability to make them multimodal. So I can type, I can talk. It has the ability for me to use any text, I can ask a question in the language I use, and it’s very good at then interpreting it into a language it understands. And it also allows us to not script the path.

So in the past, you basically had to, in the chatbot, do a decision tree that said if, then, else, if, then, else, if, then, else. And the LLM enables us to build systems, agents, that actually take us on a dynamic path based on the interaction with the user. So I’m on board for that one, I understand that from a consumer point of view. Take me through the next three planes, and then we’ll go into the enterprise space about why this is so difficult to build.
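
Shane’s contrast between the scripted chatbot and the LLM-driven path can be sketched in a few lines. Both functions below are hypothetical; `classify_intent()` stands in for whatever model call does the interpretation.

```python
# Old chatbot: a hard-coded decision tree, the 0800 menu in code.
def old_chatbot(choice: str) -> str:
    if choice == "1":
        return "open account"
    elif choice == "2":
        return "reset password"
    else:
        return "sorry, I didn't understand that"


def classify_intent(utterance: str) -> str:
    """Stand-in for an LLM call that maps free text onto a known intent."""
    return "open_account" if "account" in utterance.lower() else "unknown"


# LLM-backed front door: no scripted path, the user's own words drive routing.
def llm_chatbot(utterance: str) -> str:
    intent = classify_intent(utterance)
    return f"routing to the '{intent}' agent"


print(llm_chatbot("My name is Shane and I want to open up a bank account"))
```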

Eric: Sure. There’s five planes, just to recap for our listeners here: the consumer plane, which we’ve just talked about; the producer plane, which I’m going to talk about next; and the other three are the governance plane, the agent plane, and the operator plane. So let’s talk about the producer plane next.

The producer plane is the user interfaces, APIs, and command line interfaces, and I’ll just glom them together as ways of interacting, but the producer plane lets organizations, people, build agents, build autonomous agents. It provides, among other things, the templates. It provides the toolkit that allows them to plan their tasks, execute their tasks, and use tools.

It provides the toolkits that make them enterprise grade. Simplistically, the ideal producer experience is something that makes it easy for creators to actually build. So that’s what the producer plane is primarily about. Related to that though, and I’ll use the Apple App Store analogy, now that I’ve created my app, or my agent, now I want to see if I’m making some money.

There’s that dashboard, there’s some ability to monitor, there’s the ability to release upgrades and manage your versions. All that type of stuff is part of the producer experience. The next one is the governance plane. And by the way, sometimes I glom it into the consumer and producer ones, just because it’s not as intuitively obvious, but it’s crucial.

So there’s a governance workbench that a governance professional will be able to use. And what it does is say, I want to take a look at an agent. I want to be able to see what it’s been doing and what its policies are. And I want to be able to set up a way of ensuring that it’s doing what it’s supposed to.

So what that says is the owner of the agent is required to actually implement these things; this is a delegated or federated ownership model in almost all circumstances. So the owner has to actually implement the policies, first off at the organizational level, whether it’s GDPR or privacy, security, or otherwise.

But then there’s also the actual specific purpose of the agent. How does it actually do identity verification? Which third party identity verification agencies does it actually work with? And what constitutes a safe and valid identity verification? Those are policies that are agent specific.

So somebody needs to say what the macro level policies are and what the agent specific policies are. And then, just as importantly, somebody has to say, how do I actually know that it works? So what is the mechanism, ideally automated, though at times you do need to have some manual certification, if you will?

But what’s the mechanism of actually proving that your agent is doing what it’s supposed to? In human terms, what’s the performance appraisal that you actually get for your agent? The human analogs, by the way, are almost perfect for it. The governance plane actually says, what are the obligations and the operating model that allow agent owners to live up to their accountability and responsibility?

And how do I provide what I call a certification mechanism, as opposed to a governance mechanism? Governance implies policies and policing. Certification implies, I don’t know what it is out where you are, but here in Canada we have the Canadian Standards Association, and in the US it’s Underwriters Laboratories.

Everybody has the same thing, but the simple example is when you pick up your toaster, at least in Canada, and look at the bottom, it has this big logo that says CSA. What it really says is my toaster is not going to burn my house down, okay, and I can confidently use it. That’s the same thing the governance plane in the agent ecosystem provides.

You can lift up your agent and actually see that CSA logo that says it actually does what it’s supposed to. So there’s a user-facing capability that allows that to actually occur. And that’s fundamental, because if we don’t have that in place, then we haven’t got trust in the agent network.

That’s part of a trust framework, and there’s more to it. But if there isn’t a trust framework that ensures the agent actually does what it’s supposed to in an effective, efficient, reliable, stable way, then you’re not going to have the adoption, let alone the confidence, in using the agents.
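
A minimal sketch of the certification idea Eric describes: policies are declared per agent, an automated check runs them against the agent’s activity, and a passing agent earns its CSA-style stamp. The policy names and the shape of the audit records are assumptions for illustration.

```python
# Certification as an automated check: every policy must hold over the agent's
# audit trail before the agent is considered "certified".
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AgentPolicy:
    name: str                      # e.g. "gdpr_data_minimisation" (illustrative)
    check: Callable[[Dict], bool]  # runs against one audit record of agent activity


@dataclass
class CertificationReport:
    agent: str
    passed: List[str] = field(default_factory=list)
    failed: List[str] = field(default_factory=list)

    @property
    def certified(self) -> bool:
        return not self.failed


def certify(agent: str, audit_records: List[Dict],
            policies: List[AgentPolicy]) -> CertificationReport:
    report = CertificationReport(agent=agent)
    for policy in policies:
        ok = all(policy.check(record) for record in audit_records)
        (report.passed if ok else report.failed).append(policy.name)
    return report
```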

Shane: Let me just jump in there and play those two back before we get on to the last two. So, the producer plane is what we use to build agents. A bunch of templates, a bunch of toolkits, a bunch of patterns, things that mean reusability to build stuff out. It’ll allow us to effectively build the equivalent of workflows or the equivalent of microservices.

It’s the equivalent of software tools. It may be code, no code, low code. It doesn’t matter.

Eric: Yeah, I’d quibble with one word you used, and that’s workflow, because it implies a little bit more formality than what maybe…

Shane: Okay, so it’s not workflow, because workflow is a prescriptive path, and what we’re talking about is a bunch of autonomous agents that have rules about how they can and cannot engage, which gets us into the governance plane.

So the governance plane is effectively the ability to set policies for an agent, to be able to view the policies that have been set, to make sure those policies are enforced, and to monitor and measure or scorecard that those policies have been enforced. I like the term certification much more than governance, actually.

Once you said that an agent is certified, and tied it back to certification of things I use, I’m like, yeah, I’m with you. As soon as you say governance, I’m back into waffly data governance committees. Again, it’s just unfortunate that semantic term has been degraded over the years. So policies: defining them, viewing them, enforcing them, and measuring that the certification has actually been met.

Excellent. Right. Take me on to the last two.

Eric: Sure. The agent plane is probably, I’m going to say, the most complicated one, but I’ll try and simplify it. So as we said, based on a simple request, I want to open up a bank account, et cetera, agents can actually come up with a task plan.

So what is it I actually have to do? Which tools do I actually have to invoke to do what’s necessary? And once I have that plan, how do I go about executing it? Now, the interesting thing about this, as I mentioned earlier, and I’ll continue with the bank open example, is that the very first request goes into, at least in the framework that we have, which we’re going to open source a little later on, what I call a super orchestrator, or a super planner.

And what that planner says is, based off of the inventory of agents that I have available to myself, what does the task plan look like to fulfill the request that Shane has asked in terms of the bank? That plan is, I’m hoping, relatively deterministic.

Although I’ve been experimenting with some of our samples, and we did get two different ways to solve a problem where I thought there was only one. But anyway, to make a long story short, it should be relatively deterministic: if this is what the customer wants to do, we should be able to find a set of tasks in a very repeatable fashion to actually execute that.

Those tasks can be one of two things: they can be a local tool, get some data from a database, or they can invoke another agent. In the way we’re looking at it, an agent is just another tool. And whether that tool is used to invoke another agent or to get access to a database, that initial orchestration or planning agent now has a plan, and it sends requests out to the various different agents to actually address it.
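
Here is a small, purely illustrative sketch of that first step: the super planner maps a request onto an inventory where each entry is either a local tool or another agent. The inventory entries and the hard-coded bank-account decomposition are assumptions for illustration; in the framework Eric describes, an LLM would do the mapping.

```python
# The "super planner": given a request and the agent/tool inventory, produce a
# task plan whose steps are either local tools or other agents.
from dataclasses import dataclass
from typing import List


@dataclass
class InventoryEntry:
    name: str
    kind: str      # "tool" or "agent"
    purpose: str   # detailed purpose text the planner matches against


@dataclass
class TaskStep:
    target: str        # name of the tool or agent to invoke
    instruction: str   # what it is being asked to do


INVENTORY = [
    InventoryEntry("identity_verification_agent", "agent", "verify a customer's identity"),
    InventoryEntry("kyc_agent", "agent", "assess risk profile / know your customer"),
    InventoryEntry("account_service", "tool", "create a checking or deposit account"),
    InventoryEntry("payments_service", "tool", "make an initial deposit"),
]


def plan_request(request: str, inventory: List[InventoryEntry]) -> List[TaskStep]:
    """In the real mesh an LLM maps the request onto the inventory; here the
    bank-account-open decomposition is hard-coded so the sketch stays small."""
    return [
        TaskStep("identity_verification_agent", "confirm the requester's identity"),
        TaskStep("kyc_agent", "build the risk profile"),
        TaskStep("account_service", "open the chosen account type"),
        TaskStep("payments_service", "make the initial deposit"),
    ]
```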

Now, if I go to the identity verification, again, following through in the example, Shane wants to open a bank account. First step is identity verification. So that is another agent that says, I now need to go and do an identity verification for Shane. Here’s the policies and procedures of the organization.

So there’s a little bit of RAG type capability going on in there, but now this agent has access to the corporate policies that explain how this should actually take place and which agencies I should actually invoke. And then it goes about creating a plan itself to do it, and it will invoke other agents or other tools as necessary.

So what you actually see is the agent interaction is, as you highlighted, unstructured but somewhat deterministic. It’s unscripted but somewhat deterministic, and it’s recursive. And the agent path can evolve dynamically based off of what it learns and what worked in the past and whatnot. And this is actually very important to understand about the current crop of agents.

They require a more or less formal definition of what actually has to happen, a little bit more akin to workflow, although they have an LLM mixed in the middle. Workflow is the same way without the LLMs, there’s still a very prescriptive path. Now, if I have an agent actually go and do what I just mentioned, if I provide the corporate repository around policies and procedures and how it should be done, then the agent will find the right way, the deterministic way, of doing it.

It’s the same concept. Without RAG or corporate information, an agent will use whatever it knows from however it was trained. But if you constrain it with corporate policies, implemented through some kind of RAG technique, then it’s going to answer the question in the way that you expect. So the power of this is every agent, and we’ll talk about this, is a microservice.

It’s in a container, it’s deployed, it has some endpoints, and it has a way of interacting. It has an LLM, so we have a smart, a very smart microservice. Because it’s a microservice, I can now add enterprise grade capability, Identity Book of Record, OAuth2. So now I have a very smart enterprise grade service.

And when I add local data, access to corporate repositories, access to corporate procedures, what I have is an enterprise grade autonomous agent that knows how ANZ works. That is powerful, and that’s where the ecosystem comes into play, because as I build one or two, all of a sudden they become reusable agents.

Because they’re all in the agent inventory, and the agents can actually find them and use them. So here’s the huge epiphany. If you look at a network diagram, the number of links is roughly commensurate with the value. If I have one agent, I have no value. If I have two, I have one link between them.

If I have three agents, I have three links between them. And on it goes; there’s an algorithmic term for it, but I’ll just say it’s exponential. So the more agents I have in there, the power of the network becomes immense, because I actually have smart enterprise agents that know about ANZ, or Royal Bank, or whichever bank you happen to choose.

And those things become reusable components. That is the power.
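
For readers who want the arithmetic behind that network effect: the pairwise-link count between agents grows quickly, though strictly speaking it is quadratic (Metcalfe-style growth); it is the number of possible collaborating sub-groups (Reed’s law) that grows exponentially, which is closer to the framing above.

```latex
% Pairwise links between n agents: 1 agent -> 0, 2 -> 1, 3 -> 3, 10 -> 45
\text{links}(n) = \binom{n}{2} = \frac{n(n-1)}{2}

% Possible collaborating sub-groups of two or more agents (Reed's law):
\text{groups}(n) = 2^{n} - n - 1 \quad \text{(this is the quantity that grows exponentially)}
```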

Shane: When I think about the agent plane, there’s two core underlying patterns that I think you’re applying in there. So let’s go back to that core you talked about ages ago, which is that the agent is effectively planning and executing.

That’s the task it’s doing. And if we take the plan, what we’re looking at is an optimization pattern. There are thousands of options, and there’s a path that should be optimized for the task the human’s about to do, which requires more than one agent, so more than one node and link, and that’s an optimization pattern.

And we know that optimization patterns or problems are the hardest statistical problem to solve. Uh, I remember years ago, somebody asked me to do, uh, an optimization plan for a school. And you go, how hard can that be? And the problem was, we had a whole lot of constraints. We had the constraint of a physical classroom.

So the number of classrooms, the number of students that can fit in a classroom. We had the constraint of the teacher: when was the teacher available? We had the constraint of the subject: this teacher can only teach this subject, and sometimes they can only teach it in this classroom, you know, metalwork, that kind of thing.

And the last one was we have a bunch of students, and the students had the constraint that they had to learn these subjects in this week. And actually, when you look at it, that’s a horrible optimization problem, because you’ve got four nodes and a whole bunch of constraints. And so if you think about that and then multiply it by a large amount, because that’s how many agents and how many tasks and how many constraints we have in an enterprise, that’s a hell of a problem to solve.

And then the second pattern you talked about is what I call a contract pattern. Effectively, because the agent is a microservice, it has a contract coming in.

This is what it expects to receive for it to be able to execute. And this is what it’s going to push out. Because that’s what it delivers to the next agent who then treats that as the input for the next execution. So did I get that right? The agent plane is taking the complexity of an optimization pattern and a contract pattern and then trying to make it work in an enterprise.

Eric: Yeah, I think you got it right. I would just quibble a bit on the optimization pattern, which suggests every constraint adds an exponential level of complexity. I would call it more akin to a search pattern, a search within a bounded context, which is what the LLMs actually do very well. And because the LLMs are actually pretty smart, the output is constrained to a particular format based off of a JSON schema, so we know what the output’s going to be.

We know the inventory of agents that are out there, and every agent has a purpose, and that purpose can be, and usually is, very detailed. So what we have is actually a constrained search problem that the LLMs have proven to be very effective at. So I’m not debating that it’s complex, but a lot of the complexity has been hidden, or taken away from us, as a result of the power of the LLMs today.

Shane: Okay, so I’m kind of intrigued by this one now. From an optimization point of view, what you would normally do is you’ve got a bunch of constraints, a bunch of options, and you effectively find the most effective path. So, your plan effectively becomes a manifest.

It says, this is what I need to achieve: if I do this, then this, then this, then this, and then execute that, I’ll achieve my goal. What I think you just said, actually, is that’s not what happens. There’s not a mega agent that creates the plan. An agent does its task, and then it fires and forgets, and another agent picks it up.

So, is that how it works?

Eric: The fire and forget I would debate, but here’s the thing. The first agent needs to come up with a plan for what it knows how to do. It does not come up with the intergalactic plan that maps out how all the agents need to do their work. It works within the constraints that it’s given, the request.

Then it actually goes out. Using the bank account example, I simplified it, but there’s four steps in there. Okay, that’s the scope of what that plan needs to cover. However, remember the request went to the identity verification agent. That agent then has access to the corporate repository, the knowledge, and it comes up with a plan that is localized to its constraints.

So what it is, perhaps, is a decomposed optimization problem, as opposed to a combinatorial explosion. And that’s where the recursion part comes in. It’s a little bit like searching, a little bit like walking a tree, and every time you take a step in the tree, a fractal grows out of it. At some point it hits a tool, some real work actually takes place, and then everything rolls back up.
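
The recursion Eric describes can be sketched directly: a step is either a local tool call, where the real work happens, or a delegation to another agent, which plans within its own narrower context and then rolls its results back up. All names here are invented for illustration.

```python
# Recursive delegation: tools do real work, agent steps recurse into a local plan.
from typing import Callable, Dict, List

LOCAL_TOOLS: Dict[str, Callable[[str], str]] = {
    "fetch_policy_doc": lambda q: f"policy text relevant to: {q}",
    "call_id_provider": lambda q: f"identity confirmed for: {q}",
}


def plan_locally(agent_name: str, request: str) -> List[str]:
    """Stand-in for the downstream agent's own (RAG-constrained) planning step."""
    return ["fetch_policy_doc", "call_id_provider"]


def execute(steps: List[str], request: str) -> List[str]:
    results: List[str] = []
    for step in steps:
        if step in LOCAL_TOOLS:                       # a tool: real work happens here
            results.append(LOCAL_TOOLS[step](request))
        else:                                         # another agent: recurse
            results.extend(delegate(step, request))
    return results


def delegate(agent_name: str, request: str) -> List[str]:
    """Delegation: the downstream agent makes its own local plan, then executes."""
    return execute(plan_locally(agent_name, request), request)


# The orchestrator hands one step to the identity verification agent, which
# plans within its own scope; results roll back up to the caller.
print(delegate("identity_verification_agent", "open a bank account for Shane"))
```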

Shane: And then we can see, from an Ask AI, from a chat point of view, we don’t care that the path is non-deterministic. We don’t care where the human is driving us to go. But when we’re in an enterprise and we have a bunch of policies, so going back to your certification plane, when we have a bunch of rules that have to be applied, now we’ve got a problem, because we want a deterministic model but we’re effectively running a non-deterministic framework.

So let’s close it out with the operator plane, and then we’ll come back and talk about how, now that we understand how it should work, when we’re in an enterprise there’s a hell of a lot of complexity that’s going to make this a lot harder than just writing a chatbot or an Ask LLM agent.

Eric: Absolutely. Sure. The operator plane.

So this one, I think, for anybody who’s been in technology for a while, will be really comfortable. The technology that runs the agent ecosystem is the same, very similar stuff that we’ve seen before and that we’re very comfortable with. It probably runs in the cloud somewhere, using the typical cloud type stuff.

Maybe you’re in an enterprise, maybe you’re using Kubernetes, but we understand all that type of stuff. The only piece we may not be as comfortable with is how do we actually manage LLMs at scale? But I would argue that enterprises are starting to figure that out with MLOps and all that. So the operator plane is probably the one that we’re most comfortable with.

What we’re doing is just managing Kubernetes or cloud processing capability, and we’re using the tools that we’re all very comfortable with. All it is, as opposed to a VM over here or something else, is we have agents, which for all intents and purposes look like any other application, for the most part.

Shane: That’s going to be agent ops or agentic ops or one of those things. So it’s the DevOps capability to manage the ecosystem without humans having to do it. So then if we go back: the consumer plane, I get it, and I can see lots of patterns out there we can leverage. The producer plane: software development patterns.

Yep. Again, they need to be tailored for LLMs, but it’s a known problem with potentially a lot of known patterns. And the operator plane, same thing, there are whole movements in there. That leaves us with the certification plane and the agent plane. And the certification plane is the age old problem of governance: we’ve never really nailed any form of policy or rules-based engines to enforce our policies without a human. That’s an unsolved problem in my opinion.

And then the agent plane is the real complexity, which is this non-deterministic behavior when we want it to be deterministic in an enterprise, to constrain itself within those certifications. And if I think about that, and then I think about the complexity of systems in large organizations, the number of policies, the documents that have been written with policies that nobody really understands and that are never really enforced, I can see there’s actually a whole lot of complexity when you bring this to an enterprise.

If you’re not just building Ask AI, if you’re not just building a lovely chatbot where a human can have a good time, ask a question, and get on with the work.

Eric: Let me suggest a few things. Anytime somebody talks about generative AI or large language models, the really bad word comes in: not deterministic, two words, or non-deterministic.

Everybody says that, everybody says that. And first off, I understand it. Okay. It’s a fact, but we have known remedies for that. RAG techniques, constraining the context, using an intelligent prompt that says, just use this context. There’s ways that you can get very high levels of reliability, determinism, and repeatability and reproducibility.

So it can be done. It’s not a hundred percent deterministic, but here’s the thing: I’m not talking about using an agent to post to a general ledger, or to be the only way that you can use a payment service, or something like that. That’s not what we’re talking about. There are tools to do that.

The agent will use the tools, and the tools are a hundred percent deterministic. But here’s what we’re talking about: if you come back to my bank account open, what we’re talking about, and I’m very sensitive to this, is replacing humans, which almost by definition are not deterministic, and in fact are quite non-deterministic.

And in fact, you mentioned earlier, just in passing, that we have corporate documents and policies and procedures that, first off, nobody actually knows top to bottom, and checks and balances which suggest that even when they are applied, mistakes are made. That’s why, when I did this bank thing myself, people had to call me, text me, and email me, because there were mistakes that were made.

So here’s what I would argue: the current process that we’re talking about is not deterministic, it’s error prone. And while it’s reasonably reliable, it’s not that reliable, at least not relative to the canonical process that people want to follow. So here’s what I’m saying: it may not be perfect, and it may never be perfect, but it’ll be better than the alternative.

It’ll be cheaper than the alternative. It’ll be faster than the alternative. It’ll work 24/7. Okay, so the frame of reference, the frame of competition, needs to be adjusted to be more practical. And my proposition is, if you look at where agents are going, with the performance going exponentially up, the capability going exponentially up, and the cost going exponentially down, what we’re going to find is agents that become more and more competitive: more deterministic, more reliable, more effective, and faster.

Whereas I would argue that the humans that are out there are on a relatively stable trajectory. They’re not getting necessarily any better, faster, or cheaper. So at some point we’re going to cross that line where agents will be much more effective.

And I’m proposing that we ought to be preparing for that, building tools and policies and procedures so we can actually handle the inevitable challenges.

Shane: And on that note, if the agent plane is focused on making agents that are as small as possible, who are tasked with doing one thing and doing one thing well, and we daisy chain the agents together to achieve a bigger process, then that allows us to have more focus on the deterministic nature of that agent.

So at the moment, the problem with the LLMs is they do lots and you can do anything, and therefore it’s hard to deal with the variability of what they’re doing. Whereas when we get down to small steps, we have a lot more ability to enforce the policies for that small step.

Eric: Yeah. There are also ways that you can do that, and they’re proven, by the way.

So I’ll give an example. When we do a task plan, we actually say that the task plan needs to fit this JSON schema format, it has to adhere to this, and we then provide an example of that, and that’s all in the prompt. Okay, and then we say, based on this inventory, map this request to the inventory of tools and agents that are out there, and put it in this format.

And it is remarkably deterministic, I would say remarkably reliable. Given an appropriate prompt and appropriate context, JSON schemas, examples, you can actually do very well. And again, my proposition is it’s going to get even better as we go.
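
A minimal sketch of that technique: put the JSON Schema and a worked example in the prompt, then validate the model’s output against the schema before acting on it. The schema shape is an assumption for illustration, the third-party `jsonschema` package is used only for the validation step, and `llm()` is a stand-in for your model call.

```python
# Repeatability via schema-constrained output: prompt with schema + example,
# then validate the returned plan before anything executes.
import json
from typing import List

from jsonschema import validate  # pip install jsonschema

TASK_PLAN_SCHEMA = {
    "type": "object",
    "properties": {
        "steps": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "target": {"type": "string"},
                    "instruction": {"type": "string"},
                },
                "required": ["target", "instruction"],
            },
        }
    },
    "required": ["steps"],
}

EXAMPLE = {"steps": [{"target": "identity_verification_agent",
                      "instruction": "confirm the requester's identity"}]}


def llm(prompt: str) -> str:
    """Stand-in for your model call; returns the example so the sketch runs."""
    return json.dumps(EXAMPLE)


def plan(request: str, inventory: List[str]) -> dict:
    prompt = (
        f"Map this request to the inventory of tools and agents: {inventory}.\n"
        f"Request: {request}\n"
        f"Return JSON matching this schema: {json.dumps(TASK_PLAN_SCHEMA)}\n"
        f"Example: {json.dumps(EXAMPLE)}"
    )
    plan_dict = json.loads(llm(prompt))
    validate(instance=plan_dict, schema=TASK_PLAN_SCHEMA)  # reject malformed plans
    return plan_dict
```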

Shane: Yeah. It’s interesting, actually. I think probably, uh, the term I’ll start using is repeatable.

That’s what I’m looking at. It’s not deterministic, because it’s never going to be a hundred percent the same every time, and it’s reliable depending on what level of reliability you want. But what you want is repeatability. You want the ability, when you do this task with that agent 10 times, for it to reliably do that task within the constraints and policies you’ve given it.

So maybe repeatability engineering is the new term, that’s the new buzzword, yeah.

Eric: I think you may be right, that may have some legs there, my friend.

Shane: Let’s get onto this idea of enterprise grade. So we understand the five planes. We understand what each of them are and the steps or the tasks or the patterns they apply when we do this end to end process.

We understand the complexity of large organizations: technical complexity, process complexity, variability of tasks that have to happen, different levels of engagement from the consumer, from my mother to myself to somebody else. And then we talk about enterprise grade, or you’ve talked about enterprise grade quite a lot.

So take me on that journey. What do you mean?

Eric: Sure. So first off, as I mentioned earlier, the current crop of agent tools are, and I got to give them credit, they’re tremendously innovative. They’re new, they’re pioneers. They went where nobody went before, as they say. So my critique is not meant in a pejorative way.

But they are not enterprise grade. Coming to your question, what does enterprise grade actually mean? Simplistically, it means that they’re going to fit into a regular, normal enterprise’s operating environment and meet the regular, normal service level expectations that it has. And that means, simplistically, there’s a lot more to it, but they should be discoverable, they should be observable, they should be operable, and they should be secure.

Those four things, and there’s a bunch more you could go on with, but we’ll focus on those four. That’s what every organization expects. And if you can do that, then more often than not, you will be able to plug in. Today, if an application meets those criteria, you can typically get it into an enterprise environment.

Today, you can’t do that with agents. That is a bridge too far, unless you choose to wrap all sorts of bespoke stuff around them. And I can go into depth as to why that’s challenging, they have a shared memory model, it’s in one Python file, et cetera. That’s not how modern applications work these days. But here’s what we have learned over, I think, probably 15 years at least.

We’ve been talking about APIs and services, and we’ve learned along the way that there are a lot of patterns around how we implement microservices. If we implement microservices using those patterns, we get security, OAuth2 maybe, integration with identity books of record, mutual TLS for security. I get discoverability, I get observability, and I get operability, all by plugging microservices into well known, established tools that most enterprises already have.

That’s first and foremost what you need to think about. If I have a microservice, first off, and I add a large language model to make it smart, hence it does all the things we mentioned, what if I put it in a Docker container? The microservice endpoints are exposed, but if it’s in a Docker container, I can run it in Kubernetes, I can run it in the cloud, I can run it anywhere that I want.

So all of a sudden I have deployability as another great attribute, and they can plug into almost anybody’s DevSecOps chain. So if I can do microservices and I can do containers, okay, and I bolt on OAuth2, so I have role based access control, and I bolt on an identity book of record, now I get authentication.

Okay, what I’ve got is most of the tick boxes. Oh, and if I can emit alerts to the regular consoles, or whatever the case may be, I’ve ticked the boxes for almost all the things that your CISO, your Chief Information Security Officer, would need, and your Chief Architect, your CTO, your Chief Operations Officer. Those are all the things they want, and by bundling smart agents with an LLM, and then putting them into microservices, into containers, that gets you enterprise grade capability.

That’s what enterprises need, and today you can’t get it with any of the current toolkits. We’ve looked at all of them; none of them provide this capability. However, people can bolt on extra, bespoke stuff to actually do that. But I foresee a world where you’re going to have toolkits and templates that allow you to build

and configure, out of the box, microservices that contain agents that are smart and enterprise grade. That’s what’s happening; that’s what we’re actually building right now with a few folks. And that is the future I foresee, because if you believe in the benefits I’ve mentioned around agents, it just begs for the fact that we need to wrap enterprise capability around them

to allow them to seamlessly integrate and be adopted in the enterprise. So when I think of enterprise grade agents, that’s what I’m talking about.
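
As a rough illustration of the pattern Eric describes, the sketch below wraps an LLM-backed agent as an ordinary microservice with a bearer-token check and a structured log line. FastAPI, the /agent/ask route, validate_token and call_llm are assumptions made for this example, not the toolkit his team is building; a real service would verify tokens against the enterprise identity provider and call a real model. Containerising a service like this is then what gives you the deployability he mentions.

```python
# Sketch of a "smart microservice": an LLM-backed agent behind a normal
# HTTP endpoint with token checking and structured logging. All names here
# are illustrative, not a prescribed or real product stack.
import json
import logging

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_service")

app = FastAPI(title="invoice-agent")  # hypothetical agent
bearer = HTTPBearer()

def validate_token(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> str:
    # Placeholder: a real service would validate the OAuth2/OIDC token
    # against the identity book of record and derive roles for RBAC.
    if not creds.credentials:
        raise HTTPException(status_code=401, detail="missing or invalid token")
    return creds.credentials

def call_llm(prompt: str) -> str:
    # Placeholder for the LLM call that makes the microservice "smart".
    return f"(agent response to: {prompt})"

@app.post("/agent/ask")
def ask(payload: dict, token: str = Depends(validate_token)) -> dict:
    answer = call_llm(payload.get("question", ""))
    # Structured log line that standard enterprise logging and observability
    # tooling can ingest: which agent, which route, what outcome.
    logger.info(json.dumps({"agent": "invoice-agent", "route": "/agent/ask", "status": "ok"}))
    return {"answer": answer}
```

Run it under any ASGI server, put it in a container, and it looks to the rest of the estate like any other microservice, which is the whole point.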

Shane: That’s interesting, isn’t it? I can see where you’re going with that one. So it’s the ability to take all the things, the infrastructure and enterprise stuff, that we take for granted now in large organizations.

So logging. When something runs in a good organization, everything is logged and stored so that we can go back and see what happened: who did what, when, and how. And so what we’re saying is that if you’re deploying agents within an enterprise organization, you need to have that logging framework. Why wouldn’t you just connect into the standard logging services and patterns? They’re already there.

They’re proven. Why reinvent the wheel? I’m kind of intrigued, though; I have this thought in my head. If we think about Kubernetes and Docker, a container was a way of decomposing or decentralizing the operating system from one big machine into lots of small machines that we can treat like cattle.

Isn’t where we’re going that agents actually become the equivalent? We may deploy agents within Kubernetes, but effectively an agent is a Kubernetes instance, because the boundary is that it’s a thing that does something, and we want to be able to create and dispose of it when we want to. Effectively, that’s where we’re going.

Eric: Yeah, I would say the equivalent is that an agent is a pod within Kubernetes. It can be stopped, it can be started; when it starts, it can bootstrap, it becomes a participant in the Kubernetes network, and it is available, for all intents and purposes, to any other pod. Obviously you’re going to want to restrict that in certain cases.

But that’s the analog that I’m looking at: by putting it in a microservice, by putting it in a container, I can put it as a pod in Kubernetes, and there are other options like Kubernetes. I can put it in OpenShift, I can put it in the AWS version of that, I can put it in Google, I can put it in Azure.

I can put it just about anywhere I want once it’s in a container. I do foresee agents as pods in Kubernetes, especially if you’re on-prem, which a lot of my clients still are, or have a significant on-prem capability. It’s just another pod in the Kubernetes environment.
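
Continuing the same hypothetical sketch, liveness and readiness endpoints like the ones below are what let Kubernetes, or OpenShift, or the managed equivalents on AWS, Google, and Azure, start, stop, and health-check the agent like any other pod. The /healthz and /readyz names are common conventions, not requirements.

```python
# Hypothetical probe endpoints for the agent service sketched earlier, so
# the orchestrator can treat the agent as just another pod.
from fastapi import FastAPI

app = FastAPI(title="invoice-agent")

@app.get("/healthz")
def liveness() -> dict:
    # Liveness: the process is up and able to accept requests.
    return {"status": "ok"}

@app.get("/readyz")
def readiness() -> dict:
    # Readiness: a real check would confirm the LLM endpoint and the
    # identity provider are reachable before the pod receives traffic.
    return {"status": "ready"}
```

Point the pod spec's livenessProbe and readinessProbe at these routes and the cluster stops, starts, and replaces the agent the same way it does any other workload.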

Shane: Yeah, I get that, and I didn’t make my thought clear enough.

We used to have physical machines, and then we got VMware, where we could create multiple virtual machines, multiple operating systems, on that one physical machine. And so my thought is, if an agent becomes a pod, we may deploy it on many different technologies or many different infrastructures, platforms as a service, cloud, whatever.

But the agent itself becomes the boundary. So, like a Kubernetes instance is a boundary for something, actually the agent becomes the boundary. It’ll become a term: here’s an agent, and you can deploy it in many different ways, but the agent is actually the boundary of the thing. It could be.

Eric: Here’s what I would say: organizations will deploy agents in a fashion that makes sense for them.

You can make them as small as you want, or you can make them as granular or as coarsely grained as you choose. I would say that’s a deployment decision, an operability decision, as opposed to anything that’s constrained or specific to agents.

Shane: Agree, but what we said was, the smaller we can make the agent, the tighter we can define its policies, and the more repeatable the response from that agent, because we’ve decomposed it down into very small things that then get put together to achieve our goal.

Again, it comes down to microservices: smaller, smaller and smaller. Do one thing well, not one microservice to rule them all, not one agent to rule them all. Okay, we’ve talked about the five planes and the patterns that underpin them. That makes sense. For me, we’ve identified that the certification plane, or governance plane, with its policies, is a nightmare for us.

Again, I don’t think that’s a solved problem. The agent plane brings a whole lot of complexity in terms of new patterns for how we make that thing work, because we’re not doing deterministic workflows that are if-then-else statements. I like your definition of using an LLM to make something smart.

So a smart microservice rather than a dumb one; I kind of like that. And then when we go into the enterprise, we have the complexity of the technology stack, the complexity of their business processes, the complexity of their customers and their products. And then we also have the complexity of their enterprise requirements from a technology point of view, the things we now just expect out of the box for every piece of software that gets deployed.

And because this stuff is so new, it doesn’t exist yet. Before we close out, anything else you want to cover?

Eric: No, let me just give my last little pitch around where I see things going. The bottom line is that literally in the last, I’m going to say, 8 to 12 weeks, give or take, each of the tech giants, which are effectively seven of the top 15 biggest companies in the world, have announced tens of billions of dollars

of investment towards this so-called agentic future. It’s happening. It’s coming. We know that the current suite of agent toolkits is not necessarily enterprise grade, and there are a bunch of people, like me and others, who will figure this out. We will soon have agents that are secure, trustworthy, manageable, discoverable, and so on.

So, with that given, the next question, the second-order question, is: I’m not going to have one agent, I’m going to have a bunch of them. How do I actually manage these things in an ecosystem? And what are the services that ecosystem actually needs to allow it to grow? There’s technical capability.

There’s operating model capability, and there’s thinking about the various actors in it: producers, consumers, governance, the agents, the agent interactions, the agent plane. These are all things folks need to think about, but make no mistake about it, somebody will figure each of these things out. My proposition is that now’s the time to prepare, for those that are innovative and entrepreneurial.

Now’s the time to stake your claim in the next gold rush. So, by the way, Shane, thank you very much for having me on the show. I appreciate it. Fantastic questions, and I look forward to hopefully future dialogues.

Shane: I think there are some good, clear patterns that came out of this one.

Before we close out, if people want to find you, what’s the best way to find you and read what you’re thinking?

Eric: Sure. The best place, first off, is LinkedIn; I’m a prolific poster there. Then Medium.com, Eric Broda, you’ll find me there; almost all of my articles come out there. Also, I have a book that was recently published, Implementing Data Mesh.

So you’ll see a whole discussion on ecosystems there, but my next book, coming out with O’Reilly and entitled Agentic Mesh, will start official writing in late December or early January, and will hopefully be available sometime in September next year.

Shane: How much of a break between writing the first book and the second book did you end up having before you decided to put yourself through it again?

Eric: Yeah, there wasn’t a lot of time off between the two books. The interesting thing about writing a book is that it takes a certain number of months to write it, but then there’s a break where you can’t do anything: the tech reviewers get through it, you do some quick stuff, and then you get to the point where it’s in the publisher’s hands, and that takes another six to eight weeks.

So believe it or not, my last book came out in October, but the authoring was actually finished in the June-July timeframe. So we have had a little bit of a break, and the fortunate thing is that my next book will actually be written with my sons. My sons are also in the field, they’re actually in my company, and hopefully we’re going to be able to impart a little bit of wisdom along the way.

Shane: That’s gonna be a very different writing experience.

Eric: It probably will be, but it’s gonna be a ton of fun. It’s gonna be a ton of fun.

Shane: Thank you very much for coming on the show. That was intriguing and I learned lots as always. So I hope everybody has a simply magical day.