I joined a few thousand designers, founders and AI nerds for Canva’s first AI Vision event at the Hordern Pavilion in Sydney (12 November 2025). It honestly felt like a live snapshot of where human-centred AI is right now: exciting, messy, a bit scary, and full of possibility, so long as we don’t forget who’s actually supposed to be in charge (humans, that is).
From the opening moments, hosts CJ and Cameron Adams set the tone that this wasn’t just going to be a product parade. CJ framed it simply: Australia already has a track record of “building genuinely useful software companies that are ready to take on the world,” and AI is just the next chapter of that story. No matter where you are in your AI journey, she said, “we hope you leave tonight with something tangible.” Cameron backed that up with an invitation to “really challenge assumptions, boldly re-imagine the possible, and explore how we can use AI to not just be more efficient, but actually truly visionary.” It was less hype, more human curiosity.

The environment helped. Registration flowed without drama. The foyer felt more like a festival night than a stiff conference: food stations for pasta, dumplings and tacos, a speed-prompting activation, Leonardo.ai and other partners showing off what’s possible, and Canva staff everywhere making people feel welcome. It had the vibe of a tech event that remembered people come first; it felt genuinely community-centred.
The first talk that really hit me was from Dr Aengus Tran, founder and CEO of Harrison.ai. He didn’t start with models or metrics. He started with Diane, a 69-year-old woman in the UK whose chest X-ray had been cleared as normal by her care team. Harrison’s AI, switched on that very day, quietly flagged something subtle that everyone else had missed: stage 1 lung cancer. When the BBC asked how she felt about AI, she said, “I never really understand AI, but I think it might have just saved my life.” Aengus then walked us through the brutal maths of early diagnosis and survival rates and summed it up simply: “There are many problems that AI will solve. This one is worth solving.”
What stuck with me is how he framed the whole thing as a capacity problem, not a replacement fantasy. It takes more than a decade to train a specialist radiologist. Australia is already short around a thousand. Even if we started today, we’d never catch up. Harrison’s vision is to act as an autoscaler for healthcare: it already runs “35% of chest X-rays in England”, supports every public emergency hospital in Hong Kong, and its tools are used by half of all radiologists in Australia. AI isn’t magically becoming a doctor; it’s extending doctors so they can see more, earlier, and with fewer critical misses.

From healthcare, the night zoomed out to culture with the first panel, hosted by journalist and Triple J Hack political reporter Shalailah Medhora. The question on the table: what is Australia’s role in AI, and does our “fair go” mentality help or hinder?
Marie-Céline Merret Wiström, Head of Creative Technology and AI at Made This, argued that “Australia’s strongest currency is creativity.” She’s lived on four continents, but sees something uniquely powerful in our mix of scepticism and imagination. The trick, she said, is turning scepticism into curiosity instead of freeze mode. Do we want to lead with “proof” and wait on the sidelines, or “lead with creativity and curiosity” and learn by doing?
Sherif Mansour, Head of AI at Atlassian, leaned into the trust side of the conversation. Aussies have a strong BS detector, so transparency matters. If teams are going to use AI in serious business contexts, they need to know how it works, be able to inspect it, and see where the answers come from. He also made a point that really resonated with me as a marketer: if everyone prompts the same way, we all get the same output. The only way out of “AI slop” is taste. Teams have to inject their own tone, values and judgement into the process or everything flattens into generic.
JJ Fiasson, now Lead of Generative AI at Canva and founder of Leonardo.ai, picked up that thread and reframed AI as a creative sparring partner. “If you want authenticity, you need to be part of the process,” he said. If you just fire off a lazy prompt and accept the first answer, you get what you deserve. The interesting work comes when you bring real context, iterate, challenge the model and “finesse the output rather than just taking it as black-box automation.”
Then there was Pier Luigi Culazzo, Global Chief Data and AI Officer at Macquarie Group, who dropped the line that probably sums up where we are as a society: “You can’t opt out.” There are risks in adopting AI and risks in not adopting it, but pretending you can sit this one out is an illusion. He talked about capability gaps compounding over time and reminded everyone that ethics isn’t an abstract layer you slap on at the end; it’s grounded in real use cases and choices.
The conversation kept circling back to mindset. Sherif pushed for “tinkering” as a cultural default: have a go, learn by doing. Pier reframed AI as a collaborator where humans still own values. JJ pointed to the Australian “relentless pragmatism” that can keep us grounded. Shalailah voiced a concern many creatives share: AI still has a perception problem. Is it here to help, or to sanitise and replace what people love about their craft? Marie-Céline’s answer was that there is already a new wave of “AI artists” who are deeply technical and deeply creative, and the real risk is not that tools exist, but that we don’t upskill fast enough to keep craft at the centre.
If the panel was about mindset, Dr Sandra Peter from the University of Sydney was about the decade we’re walking into. She described the 2020s as “the most disorienting time of your careers, but also likely to be the most impactful.” Her research on The Skills Horizon has moved from “you can’t fake the language of tech” to a much more practical challenge: you actually have to make AI work in your team, business or startup.
Her core tension was sharp and simple: “You will face this fundamental tension between efficiency and expertise.” AI obviously helps with speed and productivity, but if we thoughtlessly offload everything, we end up with “vibers” sending long, shallow outputs and organisations drowning in “work slop.” Even more worrying, if entry-level tasks vanish, where do juniors learn? If AI takes all the grunt work, and all the craft work is done by seniors amplified by AI, what happens to the pipeline of expertise in the middle? (Her full findings are in the 2026 Skills Horizon report.)
She shared perspectives from global leaders like Rafael Sangiovanni in creative agencies, Rose Herceg at WPP, Robert Thomson from News Corp and Meredith Whittaker from Signal, all circling the same idea: we need to find the line between what we automate and what “should still be imagined by us humans.” We’re going to get things wrong, she reminded us, but the point is to experiment, admit when we were wrong, pivot and “keep learning on the tightrope” rather than pretending we can predict everything in advance.

Then came Craig Scroggie, CEO and Managing Director of NEXTDC, who gave everyone a crash course in the fourth industrial revolution. He grounded the AI moment in a longer story, from Nokia bricks and BlackBerry keyboards to the iPhone and the app economy, from Moore’s Law to what he called “Huang’s Law” as we shift from CPU to GPU-driven accelerated computing.
He reminded us that we are already living in the fourth industrial revolution: steam, electricity, the internet and now AI. Data, compute and connectivity are all compounding at insane rates. “Technology built by technology will drive an exponential rate of change,” he said, quoting Ray Kurzweil. The bit that really hit me was his line that “building the infrastructure of the AI era is the single most significant change in the history of technology.” When you hear it from someone whose whole job is building data centres that act as “AI factories”, it lands.
After a break to reset and refuel, we came back into a more practical, startup-focused conversation about what it really means to be an “AI-first organisation.” Journalist, broadcaster and author Antoinette Lattouf moderated with what she called a proven talent for interrupting (“and I will use physical force if necessary, but they have consented”), which kept the energy high.

Amina “Arms” Rosenberg from Minotaur Capital described her fund as “human-first, but AI-native.” They’ve rebuilt their investment workflow around large language models, from idea generation to live monitoring of portfolio companies. Instead of waiting weeks for a junior analyst to tell them a stock isn’t that interesting, they built Taureant, their internal AI tool that can generate a “quick view” in minutes. Her favourite line of the night might be the one we’ll see on slides for years: “Pessimists sound smart, but optimists make money.”
Steve Hind from Lorikeet pushed back on a shallow definition of AI-first. It’s not about taking a human task and just “doing it with AI instead.” It’s about giving humans superpowers. He shared how Lorikeet uses AI to automatically generate deep weekly updates for clients by pulling from tickets, product data and conversations. In a pre-AI world those updates wouldn’t exist at all; there simply wouldn’t be enough hours. For him, the real mistake wasn’t over-ambition, it was being under-ambitious about what AI could do.
Jacky Koh, co-founder and co-CEO of Relevance AI, offered a simple two-by-two model that I’m already stealing: co-pilot vs autopilot, and current vs aspirational tasks. The sweet spot over time, he said, is “autopilot plus aspirational tasks” – the important work you’ve never had time for, finally getting done by agents you trust. But he also admitted that early on they underestimated how crucial domain experts are. AI agents are “moulded after the person who’s building them,” so you want your best operators shaping prompts and guardrails, not leaving it to generic internet data. He closed on a note that fit the whole night: “Do stay crazy optimistic, because otherwise you’re going to keep banging your head on it – and eventually it will work.”
Tom Humphrey from Blackbird Ventures looked at this from an investor’s standpoint. Blackbird now has 70 AI-core companies in its portfolio and even an internal Product, Data and AI team making the firm itself more AI-enabled. His main advice was “clean-slate thinking.” Don’t just jam AI into existing workflows; ask, “If I was to start this process today, from scratch, how would I design it with AI in mind?” It’s a subtle but powerful shift: AI isn’t a bolt-on, it’s part of the architecture.
Then we went back into Canva’s home turf: creativity. Stef Corazza, Canva’s Head of AI Research, took the stage in a very non-AI Italian jacket and called this moment “a new creative renaissance, where basically the sky’s the limit.” He explained Canva’s idea of a Creative Operating System with Canva AI sitting between the visual suite and the platform, acting as the connective tissue that makes AI feel seamless instead of bolted on.
Until recently, he pointed out, most image models were like slot machines. You’d prompt, spin, hope for something decent, and if you wanted to change anything, you had to spin again. Even newer “omni” models still tended to give you a fixed result. Canva’s new Design Model flips that by generating editable designs layered directly into the editor, so AI can get you to 80 per cent and then you can actually push it to 100 per cent in real work.
Stef talked about three phases of AI adoption: the early Discord-and-terminal experiments, the in-tool integrations, and then the point where AI is fully woven into collaborative workflows and almost disappears from view. That’s where the real adoption curve takes off. He gave a glimpse of that future in features like Ask Canva, where AI becomes a persistent collaborator that understands your brand, your documents and your context. His guiding principle was simple and reassuring: “We are building AI around human creativity.” AI, he said, should act less like a creative director and more like a dishwasher, quietly handling the tedious stuff so humans have more time for ideas.
The final big conversation of the night brought things back to the global stage. Canva’s Cameron Adams sat down with Oliver Jay, Managing Director International at OpenAI, for what felt like a very honest fireside chat about adoption, access and the road to AGI.

OJ joked about the chairs and valuations, then answered the obvious question of why he joined OpenAI by saying, “What a time to be alive.” He talked about the difference between past “product-led growth” waves and the AI wave, where adoption has been “everywhere all at once” from day one. ChatGPT didn’t roll out to the Bay Area first and slowly creep outwards; it went global overnight, and OpenAI had to become a global company just as fast.
What really stayed with me was his focus on access and the digital divide. OpenAI’s mission is to ensure “the benefits of AGI are spread to everyone in humanity,” and that’s not going to happen by default. He described the OpenAI for Countries program, partnerships with governments on infrastructure and skills, and a big bet on voice as a way to include billions of people whose written language has never fit the QWERTY keyboard. “You have to get your hands dirty,” he said about AI adoption. Not once, but continuously, because the tools keep changing.
He shared an example from India where farmers can take photos of their crops and get tailored advice, boosting incomes by double-digit percentages, and imagined a near future where a rural farmer could design in their own language using Canva and ChatGPT by voice alone. That’s where the whole night connected for me: AI as an extension of human capability across health, creativity, education and economic opportunity, but only if we stay very intentional about ethics, safety, transparency and access as capabilities grow.
We closed with a live ChatGPT “Chatty” demo on stage. When CJ asked for three headline takeaways, Chatty replied that tonight showed that “AI is fundamentally a human practice,” that “curiosity beats caution every time,” and that the future of creativity is about making AI “so seamless that it becomes invisible.” Asked to wrap AI Vision in four words, it chose: “human, curious, creative, visionary.” It’s slightly surreal to watch an AI summarise an event about AI, but for what it’s worth, I agreed with it.
If there was an unofficial theme running across Aengus, Marie-Céline, Sherif, JJ, Pier, Shalailah, Sandra, Craig, Antoinette, Amina, Steve, Jacky, Tom, Stef, Oliver, CJ, and Cameron, it was that AI is an extension of human creativity, not a substitute. It can extend our reach, sharpen our work and unlock things we’ve never had the capacity to do before. But it does not come with values of its own. It does not bring taste, memory, lived experience or care unless we deliberately put those things into the loop.

We’re already deep into the fourth industrial revolution. AGI is no longer just science fiction on a slide; it’s an active research direction. That makes AI safety, ethics, governance and human direction non-negotiable. The time to set the norms is now, before systems become so capable that “we’ll fix it later” stops being an option.
For me, AI Vision 2025 was a reminder to treat AI as a craft, not just a tool. Something to practise, interrogate and shape over time. Something that amplifies our best work when we lead with curiosity, responsibility and optimism, and something that can quietly undermine expertise and trust if we hand it the keys and look away.
I’m choosing the first path.