#AgileDataDiscover weekly wrap No.2
TL;DR
We dug into the feasibility of the product idea with an LLM McSpikey, started testing its viability with people in the Data Governance space, looked at where it might fit in our current product strategy, and decided to experiment with our Way of Working while we experiment with the product.
Let’s get into it!
We explore the initial feasibility of the product idea a little more
As technologists at our core, Nigel Vining and I always err towards understanding the feasibility of building something before we work on the viability of it.
There is joy in building something that works, compared with getting a constant “no”, or, even worse, a shoulder shrug, in response to what you think is a great idea.
To stop us boiling the ocean and spending too much time validating the feasibility of the wrong thing, we have a process where we do internal research spikes, which we call McSpikeys, to time-box the feasibility research and force us to quickly swap back to questions of viability.
As part of the Greenfields Data Warehouse Rebuild research spike we did for a customer, we tacked on an internal McSpikey we had been thinking about for a while. For this McSpikey we tested how an LLM performed when given a number of different inputs. To dive into the details, read the full synopsis here (search or scroll down to 26/06/10).
Then we created a series of tailored prompts, based on the shared language we have crafted and use when we are teaching via our AgileData Courses, coaching Data and Analytics teams, or helping our AgileDataNetwork partners with their Way of Working.
Last, we used the LLM to produce artefacts: models, definitions and glossaries. Refer to the link above for more detail.
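We haven’t shared the actual prompts, but to make the pattern concrete, here is a minimal sketch of the approach: embed the shared language in the prompt and ask the LLM to produce a glossary-style artefact from an input. It assumes the OpenAI Python SDK; the model name, shared-language terms and prompt wording are illustrative placeholders, not our production prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative shared-language terms baked into the prompt (placeholders,
# not the real AgileData shared language).
shared_language = """
- Concept: a core business thing we hold data about (e.g. Customer, Order)
- Detail: an attribute that describes a Concept
- Event: a business interaction between Concepts at a point in time
"""

# An illustrative input; in the McSpikey the inputs ranged from text extracts
# to report screenshots.
source_input = "A customer places an order for one or more products via the online store."

prompt = f"""You are helping a data team draft a business glossary.
Describe things using only the following shared language:
{shared_language}
From the input below, list the Concepts, Details and Events you can identify,
with a one-sentence definition for each.

Input: {source_input}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The interesting part is not the call itself, it is that the prompt carries the shared language; the LLM just applies it to whatever input we hand it.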
The outputs the LLM produced were much better than we expected; the key seemed to be the quality of the shared language we used, which we have been crafting and refining for over a decade. Another interesting discovery was that the report screenshots cost us twice as much to process as the text-based inputs. That’s the value of McSpikeys: you always learn something new.
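For anyone wanting to sanity-check that kind of cost difference, the arithmetic is straightforward: multiply the input token count by the per-token price for each input type. The price and token counts below are made-up illustrative numbers; only the roughly 2x ratio reflects what we saw.

```python
# Back-of-envelope cost comparison between text and screenshot inputs.
# The price and token counts are illustrative assumptions, not actual figures
# from our McSpikey; only the ~2x ratio mirrors what we observed.

price_per_1k_input_tokens = 0.005  # assumed $ per 1,000 input tokens

text_tokens_per_report = 1_200       # assumed tokens for a text extract of a report
image_tokens_per_screenshot = 2_400  # assumed tokens for the same report as a screenshot

text_cost = text_tokens_per_report / 1_000 * price_per_1k_input_tokens
image_cost = image_tokens_per_screenshot / 1_000 * price_per_1k_input_tokens

print(f"text input:  ${text_cost:.4f} per report")
print(f"image input: ${image_cost:.4f} per report")
print(f"ratio: {image_cost / text_cost:.1f}x")
```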
This product idea looks feasible, so we should explore its viability some more.
Next we need to explore the initial viability of the product idea a little bit more
We really want to explore the viability of the migration-to-the-AgileData-platform use case, but we have a hypothesis that Data Governance / Enablement might be the most valuable one. There is a growing trend for Data Governance teams to move towards being Data Enablement teams, a trend that was reinforced by an excellent presentation at the Wellington Data Governance Meetup we attended recently, where somebody played back the key themes from last year’s DGIQ conference in Washington DC.
The next step is to see if we can get some quick market validation on the value of the product supporting Data Enablement. We reached out to a couple of people I know in the Data Governance space to pick their brains.
A much more detailed breakdown of what we heard is here, but at a high level:
- Data Governance Managers are losing their BA team members as a result of the fiscal downturn, or they are losing access to those skills in other teams as a result of downsizing.
- Data Governance Managers who take on a new role in a new organisation are faced with a standing start: if the outputs they need do not already exist, they are blocked until they can get them created.
- Starting a conversation with a blank piece of paper is always harder than starting with some known state.
- Data Policies and Procedures often exist, but there is a lack of visibility of whether they are being complied with.
These seem like problems we can help with.
We have quickly validated the feasibility of the product idea, and we think we have early signals that there may be a market and demand for the product.
If this new capability becomes both viable and feasible, where does it fit with our current product strategy?
We have a few simple choices:
- Treat it as a completely separate product, give it a new name, a separate pricing model and separate Go To Market strategy.
- Treat it as a separate module in our current product, give it a new main menu name, add a separate pricing model (unbundling the current pricing strategy), and either retain the current GTM strategy for this module or create a new one.
- Treat it as a bunch of features in our current product, make it part of the current pricing, and fold it into our current GTM patterns.
We are technologists at heart. We have a small team and we don’t plan to ever have a massive one; as a result, we are great at building, we are ok at selling, but we really struggle with marketing. Introducing a whole new product would mean splitting our limited time and skills across two products.
For that reason alone the separate product option is out, for now. This is not a decision that is set in stone; it is just a decision made quickly so we can move on to the many decisions we need to make next.
That leaves a decision between a separate module, or just a bunch of features.
We are reluctant to make this call at the moment: we think we need more certainty before making the decision.
And that’s ok; there is so much uncertainty in everything we are doing for #AgileDataDiscover right now that we can afford to delay some decisions.
Should we experiment with our Way of Working while we do this product experiment?
We have a massive amount of uncertainty about what this product is or will be, so arguably the last thing we should be doing is introducing additional uncertainty by changing the way we work while we experiment with the product.
When we designed our product development Way of Working a few years ago, we had to design it around a fractional development team that worked in different time zones. That led us to a value stream flow of work that was asynchronous and chunked into UI Design, Backend Engineering and UI Build steps.
Over time we realised this linear flow wasn’t working for us. We were over-investing in UI Design upfront, then discovering it wasn’t feasible to build in the backend or the UI. We ended up with multiple value stream flows, depending on the level of uncertainty in what we were planning to build.
The key to all of this is that Nigel and I were the key users; we were the customer for our own product.
We knew the data problems we wanted to solve; we had lived with them for over three decades. We were focussed on reducing the time it took us to do data work, reducing the cognitive load it took to do it, or automating it completely so the work was done without us. We could measure how well we were delivering on these.
This product, though, we are building for somebody else. We have a high level of uncertainty and we want to do a high level of experimentation, so we might as well experiment with changing the way we work too.
Let’s aggressively iterate the way we work while we do this.
Research led vs solution led
We have come up with a hypothesis that there are two different Product Value Streams an organisation can adopt: one is research led, the other is MVP or solution led.
Research led is where the organisation starts by researching the problems that exist in the world, identifies the problems it could solve, ideates how it might solve those problems, and then discovers whether those ideas do indeed solve them. Only after all of that does it build something.
Solution led is where you start with a problem you know or believe exists, build a minimum viable product that you think solves that problem, and then test the MVP. You are testing the problem space, the solution space and the value space all in one go.
On the AgileData Substack we go into more detail on this, but some of the key considerations are:
- User Research vs. Market Testing: UX research perfects user experience; MVP tests market demand.
- Development Time: UX takes longer; MVP is faster.
- Risk Management: UX aligns with user needs; MVP quickly validates the market.
- Iteration Approach: UX iterates on design; MVP on product-market fit.
So which approach are we going to take for #AgileDataDiscover? Follow along as we build in public.
Keep making data simply magical
Follow along on our discovery journey
If you want to follow along as we build in public, check this out on a regular basis; we will add each day’s update to it so you can watch as we build and learn.