Shane Gibson and Nigel Vining completed a 30-day public experiment using a Large Language Model for legacy data warehouse discovery. They confirmed it was feasible, viable, and valuable, securing their first paying customer. They’re showcasing their progress at Big Data London in September under their new product, AgileData Disco.
Blog
Because sharing is caring
#AgileDataDiscover weekly wrap No.5
We are in the final phase of building a new product, AgileData Disco, aimed at efficiently discovering and documenting data platforms. We are exploring various Go-to-Market strategies like SLG and PLG. Pricing strategies include options like pay per output or subscription models. We are building in public to gather feedback and refine our approach.
#AgileDataDiscover weekly wrap No.4
We review feedback, highlight emerging use cases like legacy data understanding, data governance, and automated data migration. New patterns are needed for moving from prototype to MVP. Challenges include managing tokens, logging responses, and secure data handling. The GTM strategy focuses on Partner/Channel Led Growth.
#AgileDataDiscover weekly wrap No.3
We focus on developing features such as secure sign-in, file upload, data security, and access to Google’s LLM. Challenges include improving the menu system and separating outputs into distinct screens for clarity. Feedback drives our iterative improvements.
#AgileDataDiscover weekly wrap No.2
We discuss the ongoing development of a new product idea, emphasising feasibility and viability through internal research (“McSpikeys”). Initial tests using LLMs have been promising, but strategic decisions lie ahead regarding its integration. We grapple with market validation and adjusting our workflow for optimal experimentation.
#AgileDataDiscover weekly wrap No.1
We are tackling challenges in migrating legacy data platforms by automating data discovery and migration to reduce costs significantly. Our approach includes using core data patterns and employing tools like Google Gemini for comparative analysis. The aim is to streamline data handling and enable collaborative governance in organisations. Follow our public build journey for updates.
We are working on something new at AgileData, follow us as we build it in public
The AgileData team is dedicating 30 days to exploring a novel data use case, which might lead to a new product, feature set, or module. They’ll document their daily progress publicly to share learnings and insights. Follow their journey on their blog for updates as they build and experiment in real-time.
Introducing Hai, AgileData 2024 Data Intern
I’m Hai, a name that intriguingly means “hi” in English. Originally from Vietnam, I now find myself in Australia, studying Data Science and embracing an internship at AgileData.io. This journey is not just about academic growth but also about applying my knowledge in practical, impactful ways. Join me as I explore the blend of technology and community, aiming to make a difference through data.
Defining self-service data
Everybody wants self-service data, but what do they really mean when they say that?
If we gave them access to a set of highly nested JSON data and said “help yourself”, would that be what they expect?
Or do they expect self-service to mean getting information without having to ask a person to get it for them?
Or are they expecting something in between?
I ask them which of the five simple self-service patterns they are after, to pin down which form of self-service they actually want.
There are 3 strategic / macro data use cases
I often ask which of these three macro data use cases an Organisation believes are its priorities for achieving its business strategy:
Providing data to Customers
Supporting Internal Processes
Providing data to External Organisations
Each of these three strategic / macro data use cases comes with specific data architectures and data work, and also shapes how you would design your agile data ways of working.
Eventually the data maintenance Tortoise will catch the new data work Hare
When you work in a data team you have to split your time between building and delivering new Information Products and maintaining the ones you have already delivered.
DataOps patterns can help reduce the time you spend on the maintenance work.
Building the Data Plane while flying it
In the data domain you typically have to balance between building the right thing and building the thing right.
The days of being able to spend 6 months or a year on “Sprint Zero” creating your data platform have gone.
One team I worked with called it “building the airplane as you fly it”.
Here are 5 patterns I have seen data teams adopt to help them do this.
2024 the year of the Intelligent Data Platform
AI was the buzzword for 2023 and it will continue to be the buzzword for 2024.
I have been thinking about our approach to AI in our product for a while and landed on 3 patterns that I use as a reference.
Ask AI
Assisted AI
Automated AI
Adopting these patterns moves a data platform from being a manual data platform, towards a data platform that can do some of the data work for you.
An Intelligent Data Platform.
The 3 patterns of AgileData AI
Having AI embedded in your product has become table stakes, it seems.
I have been thinking about our approach to AI in our product for a while and landed on 3 patterns that I use as a reference.
Ask AI
Assisted AI
Automated AI
Demystifying CDPs vs. Data Warehouses
In this article we describe the concepts of Customer Data Platforms (CDP) versus Data Warehouses.
The magic of DocOps
TL;DR Patterns like DocOps provide massive value by increasing collaboration across team members and automating manual tasks. But working in a DocOps way still requires a high level of technical skill. For the...
Iterations create milestone dates, milestone dates force trade off decisions to be made
Data teams struggle not to “boil the ocean” when doing data work.
Use milestones as a pattern to help the data team to focus on what really needs to be built and manage the trade-off decisions for what doesn’t.
Your data team are mercenaries, define your ways of working based on this
Modern data teams are transient, often staying less than 5 years, unlike past decades of long-term loyalty.
Companies should adapt by defining robust Ways of Working (WoW) that endure beyond individual tenures.
Balancing in-house teams with reliable data vendors for continuity and efficiency may also be a useful pattern as part of your WoW.
I’m getting pedantic about semantics
TL;DR Having a shared language is important to help a data team create their shared ways of working. When we talk about self-service, we should always highlight which self-service pattern we are talking about. I'm...
The Art of Data: Visualisation vs Storytelling
Data visualisation is like painting with data, using charts and graphs to make trends and patterns easy to understand. It’s great for presenting data objectively.
Data storytelling weaves a narrative around data, adding context, engaging emotions, and inspiring action. It’s perfect for persuading stakeholders.
Most of your data is rotten and it’s not your fault, but it is your problem
Data quality is an expectation, not an exception.
While data quality is crucial, it’s not always directly our fault when issues arise; nevertheless, it remains our problem to solve.
Data Contracts are one pattern that can help us solve this problem.
Are you delivering drills, holes or outcomes?
TL;DR Whether you're a Data Entrepreneur or an organisation looking for actionable insights, it's the business outcome these insights help you achieve that is the most important thing. Yes, you need a data platform and...
Data Asset, Data Product, Data Service?
TL;DR Should we treat data as an Asset, a Product, a Service, or a hybrid combination of all three? Data Asset, Data Product, Data Service? There have been a lot of discussions on LinkedIn, lots of podcasts, lots of...
Demystifying the Semantic Layer
The semantic layer is your mystical bridge between complex data and meaningful business insights. It acts as a translator, converting technical data into a language you understand. It works through metadata, simplifying queries, promoting consistency, and enabling self-service analytics. This layer fosters collaboration, empowers customisation, and adapts to changes seamlessly. With the semantic layer’s power, you can decipher data mysteries, conjure insights, and make decisions with wizard-like precision. Embrace this enchanting tool and let it elevate your data sorcery to new heights.
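A short sketch can make that “translator” idea concrete. The metric names, tables, and SQL expressions below are hypothetical assumptions for illustration, not an actual semantic model:

```python
# Minimal sketch of a semantic layer: metadata that translates business
# terms into the physical SQL needed to answer them. All table, column,
# and metric names below are hypothetical.
SEMANTIC_MODEL = {
    "revenue": {"table": "sales.orders", "expression": "SUM(order_total)"},
    "customer count": {"table": "crm.customers", "expression": "COUNT(DISTINCT customer_id)"},
}

def compile_query(metric: str) -> str:
    """Turn a business-friendly metric name into a physical SQL query."""
    definition = SEMANTIC_MODEL[metric]
    return f"SELECT {definition['expression']} AS value FROM {definition['table']}"

print(compile_query("revenue"))
# SELECT SUM(order_total) AS value FROM sales.orders
```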
Understanding Concepts, Details, and Events: The Fundamental Building Blocks of AgileData Design
Upgrading Python: A Plumbing Adventure in the Google Stack
In the ever-evolving world of AgileData DataOps, it was time to upgrade the Python version that powers the AgileData Platform.
We utilise micro-services patterns throughout the AgileData Platform and a bunch of Google Cloud Services. The upgrade could have gone well, or caused no end of problems.
Read more on our exciting plumbing journey.
AgileData App UX Capability Maturity Model
Reducing the complexity and effort to manage data is at the core of what we do. We love bringing magical UX to the data domain as we do this.
Every time we add a new capability or feature to the AgileData App or AgileData Platform, we ask: how could we just remove the need for a Data Magician to do that task at all?
That magic is not always possible in the first, or even the third iteration of those features.
Our AgileData App UX Capability Maturity Model helps us to keep that “magic sorting hat” goal at the top of our mind, every time we add a new thing.
This post outlines what that maturity model is and how we apply it.
Unveiling the Magic of Change Data Collection Patterns: Exploring Full Snapshot, Delta, CDC, and Event-Based Approaches
Change data collection patterns are like magical lenses that allow you to track data changes. The full snapshot pattern captures complete data at specific intervals for historical analysis. The delta pattern records only changes between snapshots to save storage. CDC captures real-time changes for data integration and synchronization. The event-based pattern tracks data changes triggered by specific events. Each pattern has unique benefits and use cases. Choose the right approach based on your data needs and become a data magician who stays up-to-date with real-time data insights!
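To make the difference between the full snapshot and delta patterns concrete, here is a minimal sketch; the row shapes and the "id" key are illustrative assumptions:

```python
# Minimal sketch contrasting two change data collection patterns.
# Row shapes and the "id" key are illustrative assumptions.
def full_snapshot(source_rows):
    """Full snapshot: capture every row on every run."""
    return list(source_rows)

def delta(previous, current, key="id"):
    """Delta: keep only rows that are new or changed since the last snapshot."""
    prior = {row[key]: row for row in previous}
    return [row for row in current
            if row[key] not in prior or prior[row[key]] != row]

yesterday = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
today = [{"id": 1, "status": "closed"}, {"id": 2, "status": "open"}, {"id": 3, "status": "open"}]
print(delta(yesterday, today))  # row 1 changed, row 3 is new
```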
The challenge of parsing files from the wild
In this instalment of the AgileData DataOps series, we’re exploring how we handle the challenges of parsing files from the wild. To ensure clean and well-structured data, each file goes through several checks and processes, similar to a water treatment plant. These steps include checking for previously seen files, looking for matching schema files, queuing the file, and parsing it. If a file fails to load, we have procedures in place to retry loading or notify errors for later resolution. This rigorous data processing ensures smooth and efficient data flow.
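As a rough illustration of those checks, here is a minimal sketch; the hash set, schema registry, and queue are assumptions for illustration, not the actual AgileData implementation:

```python
# Minimal sketch of the checks described above: previously seen file,
# matching schema, then queue for parsing. Helper structures are
# illustrative assumptions only.
import hashlib
from pathlib import Path

def process_dropped_file(path: Path, seen_hashes: set, schemas: dict, queue: list) -> str:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in seen_hashes:
        return "skipped: file previously seen"
    if path.stem not in schemas:
        return "held: no matching schema file"
    seen_hashes.add(digest)
    queue.append(path)  # parsing (and any retry on failure) happens downstream
    return "queued for parsing"
```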
The Magic of Customer Segmentation: Unlocking Personalised Experiences for Customers
Customer segmentation is the magical process of dividing your customers into distinct groups based on their characteristics, preferences, and needs. By understanding these segments, you can tailor your marketing strategies, optimise resource allocation, and maximise customer lifetime value. To unleash your customer segmentation magic, define your objectives, gather and analyse relevant data, identify key criteria, create distinct segments, profile each segment, tailor your strategies, and continuously evaluate and refine. Embrace the power of customer segmentation and create personalised experiences that enchant your customers and drive business success.
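A tiny rule-based sketch shows the mechanics; the thresholds and segment names are illustrative assumptions, not a recommended scheme:

```python
# Minimal rule-based segmentation sketch. Thresholds and segment names
# are illustrative assumptions only.
def segment(customer: dict) -> str:
    if customer["annual_spend"] > 10_000 and customer["days_since_order"] < 30:
        return "champion"
    if customer["days_since_order"] > 180:
        return "at risk"
    return "steady"

print(segment({"annual_spend": 12_000, "days_since_order": 7}))   # champion
print(segment({"annual_spend": 500, "days_since_order": 365}))    # at risk
```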
Fast Answers at Your Fingertips: Unveiling AgileData’s ‘Ask a Quick Question’ Feature
Immerse yourself in the magical world of data with AgileData’s ‘Ask a Quick Question’ capability. Perfectly designed for data analysts and business analysts who need to swiftly extract insights from data, this capability facilitates quick data queries and rapid exploratory data analysis.
The Hitchhikers guide to the Information Product Canvas
TL;DR In mid-2023 I was lucky enough to present the Information Product Canvas at The Knowledge Gap. Watch the presentation. The Information Product Canvas is an innovative pattern designed to capture data requirements visually and...
Magical plumbing for effective change dates
We discuss how to handle change data in a hands-off filedrop process. We use the ingestion timestamp as a simple proxy for the effective date of each record, allowing us to version each day’s data. For files with multiple change records, we scan all columns to identify and rank potential effective date columns. We then pass this information to an automated rule, ensuring it gets applied as we load the data. This process enables us to efficiently handle change data, track data flow, and manage multiple changes in an automated way.
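The column-ranking step might look something like the following sketch; the date formats and the simple parse-rate score are assumptions for illustration:

```python
# Minimal sketch of ranking candidate effective-date columns by how many
# of their values parse as dates. Formats and scoring are assumptions.
from datetime import datetime

def looks_like_date(value) -> bool:
    for fmt in ("%Y-%m-%d", "%Y-%m-%d %H:%M:%S", "%d/%m/%Y"):
        try:
            datetime.strptime(str(value), fmt)
            return True
        except ValueError:
            continue
    return False

def rank_effective_date_columns(rows):
    """Score each column by the share of its values that parse as dates."""
    scores = {}
    for column in rows[0]:
        parsed = sum(looks_like_date(row[column]) for row in rows)
        scores[column] = parsed / len(rows)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

rows = [{"customer": "a", "updated": "2023-01-05"}, {"customer": "b", "updated": "2023-02-11"}]
print(rank_effective_date_columns(rows))  # "updated" ranks first
```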
Unveiling the Secrets of Data Quality Metrics for Data Magicians: Ensuring Data Warehouse Excellence
Data quality metrics are crucial indicators in a data warehouse that measure the accuracy, completeness, consistency, timeliness, and uniqueness of data. These metrics help organisations ensure their data is reliable and fit for use, thus driving effective decision-making and analytics.
Amplifying Your Data’s Value with Business Context
The AgileData Context feature enhances data understanding, facilitates effective decision-making, and preserves corporate knowledge by adding essential business context to data. This feature streamlines communication, improves data governance, and ultimately, maximises the value of your data, making it a powerful asset for your business.
New Google Cloud feature to Optimise BigQuery Costs
This blog explores AgileData’s use of Google Cloud, specifically its BigQuery service, for cost-effective data handling. As a bootstrapped startup, AgileData incorporates data storage and compute costs into its SaaS subscription, protecting customers from unexpected bills. We constantly seek ways to minimise costs, utilising new Google tools for cost-saving recommendations. We argue that the efficiency and value of Google Cloud make it a preferable choice over other cloud analytic database options.
Data as a First-Class Citizen: Empowering Data Magicians
Data as a first-class citizen recognises the value and importance of data in decision-making. It empowers data magicians by integrating data into the decision-making process, ensuring accessibility and availability, prioritising data quality and governance, and fostering a data-centric mindset.
To whitelabel or not to whitelabel
Are you wrestling with the concept of whitelabelling your product? We at AgileData have been there. We discuss our journey through the decision-making process, where we grappled with the thought of our painstakingly crafted product being rebranded by another company.
Metadata-Driven Data Pipelines: The Secret Behind Data Magicians’ Greatest Tricks
Metadata-driven data pipelines are the secret behind seamless data flows, empowering data magicians to create adaptable, scalable, and evolving data management systems. Leveraging metadata, these pipelines are dynamic, flexible, and automated, allowing for easy handling of changing data sources, formats, and requirements without manual intervention.
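Here is a minimal sketch of the idea: the pipeline's steps live in metadata, so adding a new source means editing data rather than code. The step names and config keys are hypothetical:

```python
# Minimal sketch of a metadata-driven pipeline: steps are described as
# data and dispatched to handlers. Step names and keys are hypothetical.
PIPELINE_METADATA = [
    {"step": "extract", "source": "gs://landing/orders.csv"},
    {"step": "validate", "required_columns": ["order_id", "order_date"]},
    {"step": "load", "target": "warehouse.orders"},
]

def run_pipeline(metadata, handlers):
    for entry in metadata:
        handlers[entry["step"]](entry)  # dispatch on the step name

handlers = {
    "extract": lambda cfg: print(f"extracting {cfg['source']}"),
    "validate": lambda cfg: print(f"validating columns {cfg['required_columns']}"),
    "load": lambda cfg: print(f"loading into {cfg['target']}"),
}
run_pipeline(PIPELINE_METADATA, handlers)
```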
Data Consulting Patterns with Joe Reis
Dive into the world of data consulting with Shane Gibson and Joe Reis on the Agile Data Podcast. Explore their journey from traditional employment to successful data consulting, covering client acquisition, business models, financial management, reputation, sales strategies, employee management, and work-life balance.
The Enchanting World of Data Modeling: Conceptual, Logical, and Physical Spells Unraveled
Data modeling is a crucial process that involves creating a shared understanding of data and its relationships. The three primary data model patterns are conceptual, logical, and physical. The conceptual data model provides a high-level overview of the data landscape, the logical data model delves deeper into data structures and relationships, and the physical data model translates the logical model into a database-specific schema. Understanding and effectively using these data models is essential for business analysts and data analysts to create efficient, well-organised data ecosystems.
Shane Gibson – Making Data Modeling Accessible
TL;DR Early in 2023 I was lucky enough to talk to Joe Reis on the Joe Reis Show to discuss how to make data modeling more accessible, why the world's moved past traditional data modeling, and more. Listen to the episode...
AgileData Cost Comparison
AgileData reduces the cost of your data team and your data platform.
In this article we provide examples of those cost savings.
Cloud Analytics Databases: The Magical Realm for Data
Cloud Analytics Databases provide a flexible, high-performance, cost-effective, and secure solution for storing and analysing large amounts of data. These databases promote collaboration and offer various choices, such as Snowflake, Google BigQuery, Amazon Redshift, and Azure Synapse Analytics, each with its unique features and ecosystem integrations.
Data Warehouse Technology Essentials: The Magical Components Every Data Magician Needs
The key components of a successful data warehouse technology capability include data sources, data integration, data storage, metadata, data marts, data query and reporting tools, data warehouse management, and data security.
Unveiling the Definition of Data Warehouses: Looking into Bill Inmon’s Magicians Top Hat
In a nutshell, a data warehouse, as defined by Bill Inmon, is a subject-oriented, integrated, time-variant, and non-volatile collection of data that supports decision-making processes. It helps data magicians, like business and data analysts, make better-informed decisions, save time, enhance collaboration, and improve business intelligence. To choose the right data warehouse technology, consider your data needs, budget, compatibility with existing tools, scalability, and real-world user experiences.
Martech – The Technologies Behind the Marketing Analytics Stack: A Guide for Data Magicians
Explore the MarTech stack based on two different patterns: marketing application and data platform. The marketing application pattern focuses on tools for content management, email marketing, CRM, social media, and more, while the data platform pattern emphasises data collection, integration, storage, analytics, and advanced technologies. By understanding both perspectives, you can build a comprehensive martech stack that efficiently integrates marketing efforts and harnesses the power of data to drive better results.
Anatomy of a Data Product
A graphical overview of the components required for a Data Product
Unveiling the Magic of Data Clean Rooms: Your Data Privacy Magicians
Data clean rooms are secure environments that enable organisations to process, analyse, and share sensitive data while maintaining privacy and security. They use data anonymization, access control, data usage policies, security measures, and auditing to ensure compliance with privacy regulations, making them indispensable for industries like healthcare, finance, and marketing.
Free Google Analytics 4 (GA4) online courses
TL;DR There is some great free course content to help you upskill in Google Analytics 4 (GA4). Here are the ones we recommend. Discover the Next Generation of Google Analytics: find out how the latest generation of Google...
5E’s
As a Data Consultant, your customers are buying an outcome based on one of these patterns: effort, expertise, experience or efficiency.
We outline what each of these are, how they are different to each other and how to charge for delivering them.
Agile-tecture Information Factory
Defining a Data Architecture is a key pattern when working in the data domain.
It's always tempting to boil the ocean when defining yours; don't!
And once you have defined your data architecture, find a way to articulate and share it with simplicity.
Here is how we articulate the AgileData Data Agile-tecture.
Data Architecture as a Service (DAaaS)
TL;DR Data Architecture as a Service (DAaaS): is it Buzzwashing or not? As is often the case, it depends on your point of view. Our point of view? Nope, it's a real thing.
Myth: using the cloud for your data warehouse is expensive
TL;DR Cloud Data Platforms promise you the magic of storing your data and unlimited elastic compute for cents. Is it too good to be true? Yes AND no. You can run a cloud platform for a low, low cost, but it will take...
Observability, Tick
TL;DR Data observability is not something new; it's a set of features every data platform should have to get the data jobs done. Observability is crucial as you scale. Observability is very on trend right now. It feels...
DataOps: The Magic Wand for Data Magicians
DataOps is a magical approach to data management, combining Agile, DevOps, and Lean Manufacturing principles. It fosters collaboration, agility, automation, continuous integration and delivery, and quality control. This empowers data magicians like you to work more efficiently, adapt to changing business requirements, and deliver high-quality, data-driven insights with confidence.
The language of data is not so natural
TL;DR The dream is that we can just point the machine at our data, ask our question, and get a useful answer. With ChatGPT we are closer than we have ever been, but we are not there yet. When Nigel and I first started...
Build Data Products Without A Data Team Using AgileData
TL;DR Late in 2022 I was lucky enough to talk to Tobias Macey on the Data Engineering podcast about our AgileData SaaS product and our focus on enabling analysts to do the data work without having to rely on a team of...
How To Bring Agile Practices To Your Data Projects
TL;DR Late in 2022 I was lucky enough to talk to Tobias Macey on the Data Engineering podcast about combining agile patterns and practices with those from the data domain. Listen to the episode or read the transcript....
App Engine and Socket.IO
We wanted to be able to dynamically notify Data Magicians when a task had completed, without them having to refresh their browser screen constantly. Implementing websockets allowed us to achieve this.
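In Python, the pattern might look like the following Flask-SocketIO sketch. This is an assumption for illustration (the flask-socketio package standing in for whatever Socket.IO server the post describes), not the actual AgileData implementation:

```python
# Minimal sketch of pushing a "task complete" event to connected browsers
# so users never need to refresh. Illustrative only; assumes the
# flask-socketio package, not AgileData's actual implementation.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def notify_task_complete(task_id: str) -> None:
    """Broadcast an event that subscribed browsers receive immediately."""
    socketio.emit("task_complete", {"task_id": task_id})

if __name__ == "__main__":
    socketio.run(app)
```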
I can write a bit of code faster
TL;DR Getting data tasks done involves a lot more than just bashing out a few lines of code to get the data into a format that you can give to your stakeholder/customer. Unless of course it really is a one-off and...
The Focus Podcast – Agile Data Governance Patterns
Early in 2022 Shane Gibson was lucky enough to talk to the Focus podcast crew about agile governance in the data domain. Watch or listen to the episode.
ELT without persisted watermarks? Not a problem
We no longer need to manually track the state of a table: when it was created, when it was updated, which data pipeline last touched it. All these data points are available with a simple call to the Logging and BigQuery APIs. Under the covers, the Google Cloud Platform is already tracking everything we need: every insert, update, delete, create, load, drop and alter is being captured.
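For example, the BigQuery client library exposes that table state directly (the table ID below is a placeholder):

```python
# Minimal sketch: read table state straight from the BigQuery API instead
# of persisting our own watermarks. The table ID is a placeholder.
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.my_table")

print(table.created)    # when the table was created
print(table.modified)   # when it was last updated
print(table.num_rows)   # current row count
```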
Three Agile Testing Methods – TDD, ATDD and BDD
In the world of agile, there are three common testing techniques that can be used to improve our testing practices and assist with enabling automated testing.
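As a flavour of how these read in practice, here is a hypothetical pytest-style example written test-first (TDD) and annotated with BDD-style Given/When/Then comments; the function under test is invented for illustration:

```python
# Hypothetical example: a test written first (TDD), annotated with
# BDD-style Given/When/Then. The function under test is invented.
def apply_discount(total: float, rate: float) -> float:
    return round(total * (1 - rate), 2)

def test_discount_is_applied():
    # Given an order total and a 10% discount
    total, rate = 100.0, 0.10
    # When the discount is applied
    discounted = apply_discount(total, rate)
    # Then the customer pays 90.00
    assert discounted == 90.00
```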
Using a manifest concept to run data pipelines
TL;DR You don't always need to use DAGs to orchestrate. Previously we talked about how we use an ephemeral Serverless architecture based on Google Cloud Functions and Google PubSub Messaging to run our customer data...
“Serverless” Data Processing
TL;DR When we dreamed up AgileData and started white-boarding ideas around architecture, one of the patterns we were adamant we would leverage was Serverless. This post explains why we were adamant and what...
A Data Engineer an Agile Coach and a Fish walk into a bar…
This is the first of a series of articles detailing how we built a platform to make data fun and remove complexity for our users.
Analysts can model democratising data modeling
In 2022 Shane Gibson was lucky enough to present “Analysts can model democratising data modeling” at the Knowledge Gap Conference
Watch the presentation.
Data Mesh Podcast – Finding useful and repeatable patterns for data
TL;DR I talk to Scott Hirleman on the Data Mesh Radio podcast about my thoughts on Data Mesh and the need for reusable patterns in the data & analytics domain. My opinion on Data Mesh? I am not a fan of the current...
The Enchanting World of Data Magicians: Marketing Analytics vs. Product Analytics
Marketing Analytics involves analysing data from various channels, such as social media, email, and websites, to assess the performance of marketing efforts.
Product Analytics focuses on understanding and improving user experience and satisfaction with digital products or services.
What is Data Lineage?
TL;DR AgileData's mission is to reduce the complexity of managing data. In the modern data world there are many capability categories, each with their own specialised terms, technologies and three-letter acronyms. We...
Data Mesh 4.0.4
TL;DR Data Mesh 4.0.4 is only available for a very short time. Please ensure you scroll to the bottom of the article to understand the temporal nature of the Data Mesh 4.0.4 approach. This article was published on 1st...
Abracadabra! Unravel the Mysteries of Data Catalog
Data catalogs are comprehensive inventories of an organisation's data assets, helping data analysts and information consumers to quickly find, understand, and utilise relevant information. They foster collaboration, maintain data governance, and ensure compliance.
Catalog & Cocktails Podcast – agile in the data domain
Early in 2022 Shane Gibson was lucky enough to talk to the Catalog and Cocktails podcast crew about agile in the data domain. Watch or listen to the episode.
What's the hottest new data thing in 2022 — Data Mesh or Metric Store?
There is a lot of vendor washing going on. A lot of data vendors are vendor washing their technologies to pretend they enable "Data Mesh", as they are punting on Data Mesh being the new thing for 2022. I think they are...
Data Observability Uncovered: A Magical Lens for Data Magicians
Data observability provides comprehensive visibility into the health, quality, and reliability of your data ecosystem. It dives deeper than traditional monitoring, examining the actual data flowing through your pipelines. With tools like data lineage tracking, data quality metrics, and anomaly detection, data observability helps data magicians quickly detect and diagnose issues, ensuring accurate, reliable data-driven decisions.
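One of the simplest observability checks is a row-count drift test, sketched below; the three-sigma threshold is an illustrative assumption:

```python
# Minimal sketch of a data observability check: flag a daily row count
# that drifts far from recent history. Three sigma is an assumption.
from statistics import mean, stdev

def is_anomalous(history: list, today: int, sigmas: float = 3.0) -> bool:
    mu, sd = mean(history), stdev(history)
    return abs(today - mu) > sigmas * sd

print(is_anomalous([1000, 1020, 980, 1010, 995], 400))   # True: volume dropped
print(is_anomalous([1000, 1020, 980, 1010, 995], 1005))  # False: within range
```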
AgileData >>> Modern Data Stack
TL;DR AgileData's mission is to reduce the complexity of managing data. A large part of modern data complexity is selecting, implementing and maintaining a raft of different technologies to provide your "Modern Data...
A selection of practical agile patterns when using Data Vault
In 2021 Shane Gibson was lucky enough to present “A selection of practical agile patterns when using Data Vault” at the Knowledge Gap Conference
Agile DataOps
TL;DR Agile DataOps is where we combine the processes and technologies from DataOps with a new agile way of working, to reduce the time taken and increase the value of the data we provide to our customers. What's in a...
The “Killer” Feature
One feature to rule them all. As product managers we are always looking for the next “killer feature” for our product. You know the one: that feature that will become the magical thing that will have customers flooding...
3 types of product features
Our UX/UI journey is accelerating. We are currently full steam into the development of the initial User Interface for AgileData.io. The team have done some awesome work on the UX designs for a bunch of the core screens,...
Reducing Manual Effort, Everywhere, Every-time
Some tasks seem really small and only take minutes, but multiply that effort by completing that task a hundred times and you have found a task that should be automated. Collect your data. In AgileData we automate the...
Why assumptions are just that
When we first sketched out our plans for AgileData we were pretty clear what AgileData would do and what it wouldn't do. Those assumptions didn't last long. Combine with magic. We knew we wanted to focus on what we call...
Micro Actions, ensuring a little bit of Magic Happens Here everytime
We are currently doing some work on creating rule patterns that enable us to automagically find duplicate Concept values and create a master view of them. For example, creating a master view of Customers, or a master...
Buy, Build or Lease
One of the (many) things we needed to decide when we started to build out the AgileData Minimal Magical Product (MMP) was which capabilities we would build vs which capabilities we would lease or buy. As part of our...
What problem(s) does AgileData solve?
This should be an easy one for me to answer, right? Understand the customer's problem first. We get told in startup land that you need to understand the customer's problem in-depth before you start building your product....
Why we chose Google Cloud as the infrastructure platform for AgileData
Pick a few things that really matter, not thousands of “requirements”. When we first started developing the core of the AgileData backend for the MVP, we knew we would need a cloud database to store...
Why we founded AgileData
My co-founder Nigel and I have been working in the data and analytics domain for over 30 years (well, I have; he is slightly younger). We have both held multiple roles through these years, Nigel primarily in the...
Google BigQuery – Online Training resources
As part of our journey exploring the use of the Google Cloud Platform (GCP) as the core of our infrastructure for AgileData.io, we have had to start unlearning a lot of our AWS and Microsoft Azure knowledge and...
It's 2019 – Isn't the Data Warehouse dead?
With the advent of self-service data discovery tools and big data platforms, the bold announcements of "the data warehouse is dead" started to ring out around the world. So why, you would ask, are we building a new...
AgileData App
Explore AgileData features, updates, and tips
Network
Learn about consulting practices and good patterns for data focused consultancies
DataOps
Learn from our DataOps expertise, covering essential concepts, patterns, and tools
Data and Analytics
Unlock the power of data and analytics with expert guidance
Google Cloud
Imparting knowledge on Google Cloud's capabilities and its role in data-driven workflows
Journey
Explore real-life stories of our challenges, and lessons learned
Product Management
Enrich your product management skills with practical patterns
What Is
Describing data and analytics concepts, terms, and technologies to enable better understanding
Resources
Valuable resources to support your growth in the agile, and data and analytics domains
AgileData Podcast
Discussing combining agile, product and data patterns.
No Nonsense Agile Podcast
Discussing agile and product ways of working.
App Videos
Explore videos to better understand the AgileData App's features and capabilities.