From ChatGPT to Production AI Agents: The DI AI Foundry Story
How Design Industries built an AI practice by doing the work first — and compressed two years of learning into systematic frameworks for enterprise AI adoption.
It Started with a Simple Question
In January 2024, we asked ourselves a question that every technology services company was asking: what does AI actually mean for our business?
Most companies answered that question with a strategy document. Some formed a steering committee or an AI working group to evaluate options and build a business case. Others are still answering it: two years later, a surprising number of organisations haven't started at all. No pilot. No experimentation. Nothing beyond a few people using ChatGPT on their phones during lunch.
We took a different path. We opened ChatGPT accounts for the team members who were already leaning in, and told them to start using it. No playbook. No governance framework. Just hands on keyboards, figuring out what worked and what didn't.
That decision, to learn by doing rather than planning to learn, became the founding principle of DI AI Foundry.
Like a Formula One team that gains competitive advantage through early track time, we understood that AI expertise would come from doing the work, not just studying it.
"While some organisations were still writing their AI strategy, we were already learning what worked and what didn't."
The AI4DI Program
We called it AI4DI. Starting in early 2024, we gave ChatGPT accounts to the team members who wanted them. Not everyone, just the people who were already curious, already interested, already seeing where this was heading. The brief was simple: use these in your actual work. Not in a sandbox. Not in an "innovation lab" that nobody takes seriously. In client delivery, in proposals, in internal operations.
We didn't write a governance framework first. We didn't form a committee. We applied our Formula One precision methodology to a new challenge: systematic experimentation. But we weren't reckless about it. Everything ran against our own internal data, our own processes, our own content. We weren't experimenting with client environments or client data. That was the line, and it was non-negotiable from day one.
What We Discovered in Early 2024
Use Cases That Delivered
We learned which use cases delivered genuine productivity gains and which were mere novelty. Real work separated the signal from the noise.
Oversight Requirements
We learned where AI needed human oversight and where it could run independently. The gap between "impressive demo" and "production-ready tool" was enormous.
Experiential Learning
AI adoption is experiential. You cannot read your way to competence. The organisations that lead are the ones that started doing the work early.
"We kept it on our own turf. Our data, our processes, our mistakes to make. Client environments were off limits until we knew what we were doing."
Rovo Changed Everything
In July 2024, Atlassian invited us into the Rovo Beta program. We were among the early cohort of partners granted access to the platform before its general release. We enrolled on 10 July 2024 and activated our beta access the following day, 11 July 2024.
JANUARY 2024
AI4DI Program Launch
Hands-on experimentation begins with ChatGPT accounts for curious team members.
JULY 2024
Rovo Beta Cohort 1
Early access granted. Building agents, testing capabilities, direct feedback to product team.
LATE 2024
AI Fast Start Launches
20-hour productised engagement built from institutional knowledge. Systematic AI deployment.
JULY 2025
Claude & MCP Integration
Atlassian MCP server launches. Multi-AI architectures become operational reality.
For an Atlassian Solution Partner like Design Industries, this was a pivotal moment. Rovo wasn't just another AI chatbot bolted onto existing products. It represented a fundamental shift in how organisations could interact with their Atlassian ecosystem: unified search across every connected tool, conversational AI grounded in your actual business data, and the ability to build custom AI agents without writing code.
Within days of activation, our team was building agents, testing search capabilities, and providing direct feedback to the Rovo product team through a dedicated Slack channel. That beta period, running from July through September 2024, gave us something that cannot be purchased: hands-on experience with the platform before anyone else had it.
Formula One teams know that early track time reveals insights competitors can't access later. The same proved true for Rovo. By the time general availability arrived, we had already navigated the learning curve our clients were about to face.
"The real insights don't come from reading about AI. They come from deploying it in your own environment and seeing what happens."
Building the AI Fast Start
By late 2024, we had accumulated months of practical AI experience across both general-purpose tools like ChatGPT and Claude, and Atlassian-native AI through Rovo. We had a clear picture of what enterprise organisations needed to get started, and more importantly, what they kept getting wrong.
The pattern was consistent. Organisations would switch on Atlassian Intelligence, play with Rovo for a few days, get underwhelming results, and conclude that "AI isn't ready yet." The problem was never the technology. The problem was the foundation.
AI Fast Start: 20-Hour Productised Engagement
Activate Intelligence
Atlassian Intelligence and Rovo deployment across your instance
RAG Foundations
Retrieval Augmented Generation setup for accurate, grounded AI outputs
Data Cleanup
Confluence and Jira cleanup initiated for clean, structured data
Agent SDLC
Software Development Lifecycle for systematic agent production
The AI Fast Start exists because we learned, through our own experience, that the first 20 hours matter more than anything that comes after. Get the setup right and everything accelerates.
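To make the RAG Foundations step above concrete, here is a minimal sketch of what a "grounded" output means in practice. It is illustrative only: the search_confluence helper is a hypothetical stand-in for whatever retrieval layer sits over your cleaned-up Confluence content (Rovo search, a vector index, or similar), and the model id is a placeholder to swap for whichever Claude model is current.

```python
import anthropic


def search_confluence(question: str, top_k: int = 3) -> list[str]:
    """Hypothetical retrieval step: return the top-k Confluence passages
    relevant to the question. In a real setup this queries a vector index
    or Rovo search built over a cleaned, well-structured space; here it
    returns placeholder text so the sketch runs end to end."""
    return [f"(placeholder Confluence passage {i + 1} about: {question})" for i in range(top_k)]


def grounded_answer(question: str) -> str:
    passages = search_confluence(question)
    # Put the retrieved passages into the prompt so the model answers from
    # your own documentation rather than from its general training data.
    context = "\n\n".join(f"[Source {i + 1}]\n{p}" for i, p in enumerate(passages))
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Answer the question using only the sources below. "
                "If the sources do not cover it, say so.\n\n"
                f"{context}\n\nQuestion: {question}"
            ),
        }],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(grounded_answer("What does the AI Fast Start include?"))
```

The detail worth noticing is that the quality of the retrieved passages, not the model call, determines whether the answer is accurate. That is why the Data Cleanup and RAG Foundations steps come before anything else.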
Claude and MCP: The Acceleration Point
While we were building out our Atlassian AI capabilities through Rovo, something else was happening in the broader AI landscape that would fundamentally change our trajectory.
Anthropic launched the Model Context Protocol (MCP) in November 2024, an open standard designed to connect AI models to external tools and data sources. Think of it as a universal adapter: instead of building custom integrations for every AI tool, MCP provides a standardised way for any AI model to plug into any data source.
Atlassian was one of the first major enterprise platforms to build an MCP server, launching their official Remote MCP Server in July 2025 with Anthropic's Claude as their first official integration partner. For us, that was a turning point.
❌ Before MCP
- Custom integrations for every AI tool
- Manual data retrieval across platforms
- Context switching between tools
- Limited AI-to-data connectivity
✅ With MCP
- Standardised universal protocol
- Direct AI-to-Atlassian connection
- Seamless multi-tool workflows
- Connected ecosystem architecture
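For the technically minded, here is a minimal sketch of what that direct connection looks like from the client side, using the open-source MCP Python SDK. It is a sketch under assumptions, not a reference implementation: the endpoint URL and the commented-out tool name are placeholders to verify against Atlassian's current documentation, and we route through the mcp-remote proxy so it can handle the OAuth handshake with the remote server.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed endpoint for Atlassian's Remote MCP Server; confirm against
# Atlassian's documentation before use.
ATLASSIAN_MCP_URL = "https://mcp.atlassian.com/v1/sse"


async def main() -> None:
    # mcp-remote proxies the remote server over stdio and manages OAuth.
    server = StdioServerParameters(
        command="npx",
        args=["-y", "mcp-remote", ATLASSIAN_MCP_URL],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the Jira and Confluence tools the server exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical call; real tool names come from list_tools().
            # result = await session.call_tool("searchConfluence", {"query": "AI Foundry"})


if __name__ == "__main__":
    asyncio.run(main())
```

The same server can also be attached directly to Claude Desktop or Claude Code through their MCP configuration; the sketch simply makes visible that one standard protocol replaces every per-tool integration in the left-hand column above.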
Building the Foundry's Own Foundation
As DI AI Foundry took shape as a research initiative, we hit a familiar problem: our own marketing content didn't reflect what we had become. The website copy, the proposals, the service descriptions were all written before we had a real AI practice. They needed a complete overhaul.
Rather than outsource it or treat it as a side project, we used it as a product discovery exercise for the Foundry itself. We set up a dedicated Confluence space to house everything: service definitions, RAG foundations, style guides, naming conventions, SEO keyword strategies, content audit frameworks. Every piece of the puzzle that a well-structured AI-ready knowledge base needs.
Then we pointed Claude at it and started rewriting.
The Virtuous Cycle
We discovered gaps in our documentation
We documented those gaps systematically
The RAG foundation improved continuously
Claude's outputs became more accurate and consistent
Result: A complete refresh of our marketing content — not AI-generated slop, but precisely engineered content built on clean foundations.
"Every piece of content we rewrote made the next one better. That's what a well-built RAG foundation does."
Why This is Experiential (And Why That Matters for You)
Here is the honest truth about enterprise AI in 2025 and 2026: there is an enormous amount to learn, and it is significantly easier to be shown than to figure it out on your own.
This isn't a reflection on anyone's intelligence or technical capability. It's a function of how rapidly the landscape is evolving. Between Atlassian Intelligence, Rovo agents, Rovo Dev, RAG architectures, MCP integrations, and general-purpose AI tools like Claude, the number of moving parts is substantial. Each one has nuances that only become apparent through hands-on implementation.
We have spent over two years building this institutional knowledge. We've made the mistakes so our clients don't have to. We've tested the edge cases, documented the patterns that work, and built frameworks that turn AI deployment from an open-ended experiment into a systematic, repeatable process.
Formula One Precision Methodology
We've compressed two years of learning into frameworks that deliver results in weeks. That's not marketing speak. That's institutional knowledge your organisation can access immediately instead of building from scratch.

"Two years of lessons learned, compressed into weeks. That's the value of working with someone who's already been through it."
Where DI AI Foundry is Heading
The DI AI Foundry has evolved well beyond its origins as an internal AI experimentation program. Today it operates as Design Industries' dedicated AI research and development initiative, with tangible service outcomes that span the full spectrum of enterprise AI adoption.
AI Fast Start
For Atlassian environments — rapid deployment in weeks
RAG Implementation
Knowledge base optimisation for accurate AI outputs
Custom Rovo Agents
Bespoke agent development and deployment
AI Governance
Frameworks for enterprise compliance requirements
Claude & MCP Integration
Multi-AI architectures for connected ecosystems
N8N Workflow Orchestration
Complex automation scenarios and multi-tool workflows
We're also seeing a significant shift in what clients are asking for. The early conversations in 2024 were about "should we use AI?" The conversations in 2026 are about "how do we scale AI across the organisation?" That shift, from exploration to execution, is exactly where DI AI Foundry operates best.
The Compounding Advantage
Every month of hands-on AI work builds on the months before it. The organisations that started their AI journey in early 2024 are now operating with two years of institutional knowledge, refined processes, and proven patterns. The organisations starting now have a gap to close.
That gap isn't insurmountable. But it does mean that having a partner who has already walked the path, who has already made the mistakes and built the solutions, is the difference between a six-month learning curve and a six-week deployment.
That's what DI AI Foundry was built to deliver: systematic excellence in AI adoption, backed by two years of hands-on experience across Atlassian native AI, general-purpose AI platforms, and multi-AI enterprise architectures.
"The best time to start was January 2024. The second best time is right now."
Trusted by Leading Australian Enterprises
As an Atlassian Gold Solution Partner, Design Industries serves enterprise and government clients including ANZ, Australia Post, Australian Bureau of Statistics, Afterpay, Victoria Police, and Berry Street with our Formula One precision methodology.
- ✓ Atlassian Gold Solution Partner
- ✓ Melbourne-based with a 25-person specialist team
- ✓ Serving enterprise and government clients across Australia
Ready to Break the Busy Trap?
Our Digital Factory foundation package provides a systematic entry point to process optimisation for Australian enterprises: 50 hours of strategic improvements delivered in 3–4 weeks, with measurable results.

