Blogs
Because sharing is caring
Can we use an Information Product Canvas image to start the data design process?
Can a completed Information Product Canvas image help with the initial design of the data environment to deliver that Information Product?
Overall, the Disco outputs were not good enough to be used as a data design (and may never be perfect), so there is more iteration to be done.
But the output it did generate was an encouraging start.
DataOps Patterns with Chris Bergh
Join Shane Gibson as he chats with Chris Bergh on improving your team's way of working by using DataOps patterns.
What is a Fractional role and why should you care?
A fractional team or a fractional role is one where an organisation gets access to experienced professionals who are hired on a part-time or as-needed basis.
Fractional professionals typically work across multiple organisations at the same time, providing their expertise where and when it is needed most.
This fractional pattern allows organisations to get access to skills and capabilities that they cannot find, or cannot afford to hire, as permanent members of their team.
Merging Data Vault and Medallion Architecture Patterns with Patrick Cuba
Join Shane Gibson as he chats with Patrick Cuba on combining the Data Vault data modeling pattern with the Medallion Architecture pattern.
Agentic Mesh Ecosystem Patterns with Eric Broda
Join Shane Gibson as he chats with Eric Broda on the patterns required to create an ecosystem to support the use of Agents in enterprise organisations.
AgileData Feature #03 – Catalog Browse and Search
Data Personas can use the Catalog to quickly search, find and access the Data Assets (Tiles) they need to complete a data task, such as supporting the development of Information Products or using the data for analysis.
Topic Tags and Dynamic Filtering allow Data Assets to be easily found using the organisation's business terminology.
With the Menu Anywhere capability they can drill out to any other screen in the AgileData App, retaining the context of the Tile and removing the need to go back to that screen and manually search for the Data Asset again.
Any time a data user performs a data task in the AgileData App, or an automated data task is executed on the AgileData Platform, the Catalog is automagically updated. The Catalog data is treated as a first class citizen, not as data exhaust.
The Catalog is part of a wider ensemble of features that lets data users search or browse to locate essential Data Assets quickly, view comprehensive details for each set of data, and track the flow of data much like parcel tracking.
Reliability Engineering of AI Agents with Petr Pascenko
Join Shane Gibson as he chats with Petr Pascenko on the pattern of Reliability Engineering of AI Agents.
One suggestion on starting your journey into becoming a data consultant
At #AgileData we are focussed on helping people become data entrepreneurs.
Whether they want to start by side hustling, become a one-person data band, or grow their small data consulting company's revenue and margin without growing their team, we are keen to be part of their journey.
AgileData Feature #02 – Single Sign-in
With Single Sign-On (SSO), users gain streamlined access to the AgileData App, eliminating the need for repeatedly logging in and reducing friction in their data day.
This feature leverages Google Identity and Access Management (IAM) to provide seamless and secure authentication, ensuring that users can easily connect to the platform using their existing credentials.
AgileData’s SSO is not only secure but also scalable, designed to meet the needs of growing teams while maintaining the highest standards of data protection and access control.
AgileData App
Explore AgileData features, updates, and tips
Network
Learn about consulting practices and good patterns for data focused consultancies
DataOps
Learn from our DataOps expertise, covering essential concepts, patterns, and tools
Data and Analytics
Unlock the power of data and analytics with expert guidance
Google Cloud
Imparting knowledge on Google Cloud's capabilities and its role in data-driven workflows
Journey
Explore real-life stories of our challenges and lessons learned
Product Management
Enrich your product management skills with practical patterns
What Is
Describing data and analytics concepts, terms, and technologies to enable better understanding
Resources
Valuable resources to support your growth in the agile, data, and analytics domains
AgileData Podcast
Discussing combining agile, product and data patterns.
No Nonsense Agile Podcast
Discussing agile and product ways of working.
App Videos
Explore videos to better understand the AgileData App's features and capabilities.
5 core Data Collection Patterns
At AgileData, delivering our Fractional Data Service has revealed the diverse challenges of integrating data from varied organisations, industries, and systems. To scale effectively, we’ve adopted five core data collection patterns based on our “Define it Once, Reuse it Often” (DORO) principle:
1. Push
2. Pull
3. Stream
4. Share
5. File Drop
These patterns are supported by a toolkit of tested technologies like Dataddo, Meltano, and Google services, allowing us to solve new data challenges quickly. Our approach ensures flexibility and scalability, always starting with the question: Push, Pull, Stream, Share, or File Drop?
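Choosing between the five patterns is essentially a capability check on the source system. As a purely hypothetical sketch (this is not AgileData's actual implementation, and the capability flags and precedence order below are assumptions for illustration only), the decision could look like this:

```python
from enum import Enum

class CollectionPattern(Enum):
    PUSH = "push"
    PULL = "pull"
    STREAM = "stream"
    SHARE = "share"
    FILE_DROP = "file_drop"

def choose_pattern(source: dict) -> CollectionPattern:
    """Pick a collection pattern from a source system's declared capabilities.

    The precedence here is illustrative: prefer zero-copy sharing, then
    real-time streams, then push, then scheduled pull, falling back to a
    file drop when nothing else is available.
    """
    if source.get("supports_sharing"):   # e.g. a shared dataset in the warehouse
        return CollectionPattern.SHARE
    if source.get("emits_events"):       # e.g. an event stream or message topic
        return CollectionPattern.STREAM
    if source.get("can_push"):           # the source calls our endpoint
        return CollectionPattern.PUSH
    if source.get("has_api") or source.get("has_database"):
        return CollectionPattern.PULL    # we poll on a schedule
    return CollectionPattern.FILE_DROP   # files landed in a bucket or folder

# Example: a SaaS app with a REST API but no webhooks falls through to Pull
print(choose_pattern({"has_api": True}).value)
```

The point of the sketch is the "Define it Once, Reuse it Often" shape: one reusable decision, applied to every new source, rather than a bespoke design each time.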
An Experiment – Top Data Trends for 2025 with Coalesce and Google NotebookLM
Join two LLM-generated guests as they discuss the Top Data Trends for 2025 white paper published by Coalesce.
This is a different episode. Instead of a human guest, we have two robot guests.
I decided to try an experiment: could I upload a white paper to an LLM, have it generate a podcast, listen to that podcast on my daily walk, and see whether the summary removes the need for me to actually read the white paper?
So in this case, I grabbed a white paper called Top Data Trends for 2025 from Coalesce, uploaded it to Google NotebookLM, and got it to generate a podcast with two hosts chatting about the white paper.
Have a listen, let me know what you think.
AgileData Feature #01 – Marketplace
Information Consumers can quickly search, find and access the Information Products they need to answer their business questions.
This feature enables Information Consumers to search and find all the available dashboards, reports and analytical models in the AgileData App, regardless of what third party Last Mile tool they are created in.
It also enables the Information Consumer to quickly open that report or dashboard directly from the Marketplace, when those reports are accessible by a web URL.
Data Contracts with Andrew Jones
Join Shane Gibson as he chats with Andrew Jones on the pattern of Data Contracts.
NZ Scaleup AgileData achieves Google Cloud Ready – BigQuery Designation
AgileData has achieved Google Cloud Ready – BigQuery designation, streamlining data management for customers and partners. This certification confirms the integration’s functionality and reliability, reducing complexity through a low-code interface. By leveraging Google Cloud’s infrastructure and BigQuery, AgileData empowers business leaders to rapidly gain insights and make informed decisions efficiently.
The last (for now) of our #AgileDataDiscover summaries!
Shane Gibson and Nigel Vining completed a 30-day public experiment using a Large Language Model for legacy data warehouse discovery. They confirmed it was feasible, viable, and valuable, securing their first paying customer. They’re showcasing their progress at Big Data London in September under their new product, AgileData Disco.
#AgileDataDiscover weekly wrap No.5
We are in the final phase of building a new product, AgileData Disco, aimed at efficiently discovering and documenting data platforms. We are exploring various Go-to-Market strategies like SLG and PLG. Pricing strategies include options like pay per output or subscription models. We are building in public to gather feedback and refine our approach.
#AgileDataDiscover weekly wrap No.4
We review feedback, highlight emerging use cases like legacy data understanding, data governance, and automated data migration. New patterns are needed for moving from prototype to MVP. Challenges include managing tokens, logging responses, and secure data handling. The GTM strategy focuses on Partner/Channel Led Growth.
AI Data Agents with Joe Reis
Join Shane Gibson as he chats with Joe Reis on the potential adoption of GenAI and LLMs in the way data teams work.
#AgileDataDiscover weekly wrap No.3
We focus on developing features such as secure sign-in, file upload, data security, and access to Google's LLM. Challenges include improving the menu system and separating outputs into distinct screens for clarity. Feedback drives our iterative improvements.
#AgileDataDiscover weekly wrap No.2
We discuss the ongoing development of a new product idea, emphasising feasibility and viability through internal research (“McSpikeys”). Initial tests using LLMs have been promising, but strategic decisions lie ahead regarding integration. We are grappling with market validation and adjusting our workflow for optimal experimentation.
#AgileDataDiscover weekly wrap No.1
We are tackling challenges in migrating legacy data platforms by automating data discovery and migration to reduce costs significantly. Our approach includes using core data patterns and employing tools like Google Gemini for comparative analysis. The aim is to streamline data handling and enable collaborative governance in organisations. Follow our public build journey for updates.
We are working on something new at AgileData, follow us as we build it in public
The AgileData team is dedicating 30 days to exploring a novel data use case, which might lead to a new product, feature set, or module. They’ll document their daily progress publicly to share learnings and insights. Follow their journey on their blog for updates as they build and experiment in real-time.
Bridging Data and Product Management Practices and Patterns with Juha Korpela
Join Shane Gibson as he chats with Juha Korpela on how to adopt patterns and practices from Product Management and apply them to the data domain.
Introducing Hai, AgileData 2024 Data Intern
I’m Hai, a name that intriguingly means “hi” in English. Originally from Vietnam, I now find myself in Australia, studying Data Science and embracing an internship at AgileData.io. This journey is not just about academic growth but also about applying my knowledge in practical, impactful ways. Join me as I explore the blend of technology and community, aiming to make a difference through data.
Patterns for being a successful internal data consultancy with Dylan Jones
Join Shane Gibson as he chats with Dylan Jones on how to adopt patterns used by successful data consultancies and apply them in your organisation as an internal data team.
Defining self-service data
Everybody wants self-service data, but what do people really mean when they say that?
If we gave them access to a set of highly nested JSON data and said “help yourself”, would that be what they expect?
Or do they expect self-service to mean getting information without asking a person to get it for them?
Or are they expecting something in between?
I ask them which of the five simple self-service patterns they want, to find out which form of self-service they are after.
Combining agile and data five years on with Blair Tempero
Join Shane Gibson as he chats with Blair Tempero on the last five years since they started the AgileData Podcast together.
There are 3 strategic / macro data use cases
I often ask which of these three macro data use cases an organisation believes are its priorities for achieving its business strategy:
Providing data to Customers
Supporting Internal Processes
Providing data to External Organisations
Each of these three strategic / macro data use cases comes with specific data architectures and data work, and also shapes how you would design your agile data ways of working.
Eventually the data maintenance Tortoise will catch the new data work Hare
When you work in a data team you have to split your time between building and delivering new Information Products and maintaining the ones you have already delivered.
DataOps patterns can help reduce the time you spend on the maintenance work.
Building the Data Plane while flying it
In the data domain you typically have to balance between building the right thing and building the thing right.
The days of being able to spend 6 months or a year on “Sprint Zero” creating your data platform have gone.
One team I worked with called it “building the airplane as you fly it”.
Here are 5 patterns I have seen data teams adopt to help them do this.