It's Time To Step Up
- Pete Cohen

- Feb 27

As I approach my 50s, I have two voices in my head that are getting louder and louder.
One voice reminds me that - as I look around at my peers, colleagues and clients - we are now the adults in the room. There's no one else to look to who is going to make the important decisions that will shape our society. As emerging elders, we have a critical role to play.
The other voice, which tends to manifest as a literal vocal refrain in many conversations, shouts that “this is such a weird time.” It is inevitably triggered by the endless stream of examples of how unevenly distributed our collective understanding of AI is - a technology I believe is one of the most significant forces the human race has encountered.
And it's the intersection of these two things that is starting to keep me up at night.
My core concern is that most people in decision-making roles don't yet have enough awareness to get us through this period safely.
Through the course of my work I get to spend time with some of the most brilliant and influential people in the country. Most of that work is at the intersection of strategy, innovation and technology, so of course AI is always a central topic. However, what I observe in those workshops and boardrooms is that the general level of awareness and discourse remains at quite a shallow level when it comes to AI. I don’t mean this as a criticism. AI is evolving at breakneck speed, and leaders already have a fully stacked plate of concerns and responsibilities, and learning about AI hasn’t been a priority. But we are approaching an inflection point. In order to navigate both the opportunities and challenges that are unfolding before our eyes, we collectively need much more than a surface understanding.
The momentum of AI over the past few years has been unavoidable. All of our news feeds are saturated with the headlines. But unless you are hands-on with the technology beyond using chatbots such as Copilot or ChatGPT, you might not be aware that something meaningfully shifted at the end of 2025. People who are following AI closely are starting to freak out - to put it plainly and bluntly.
Matt Shumer wrote an article called Something Big Is Happening, which went viral at the beginning of February.
The premise of the article is that software development is the canary in the coal mine when it comes to the impact on jobs, and that it is no longer hypothetical. It is actually happening right now.
What we have witnessed with the release of the latest generation of frontier AI models (i.e. Opus 4.6, Codex 5.3, Gemini 3.1) is a step change in capabilities, especially in software development. Whereas until late 2025 these models could work autonomously for a few minutes and produce some reasonable code, they can now run for hours and output fully working, comprehensive software systems. Of course AI-generated code isn’t perfect and there are some nuances to consider, but the key point is still the same - the activity of software development has basically been solved by AI already. This is significant because software already ate the world, and now it can build itself. I can’t think of any clearer preconditions for an exponential curve.
Dario Amodei (CEO of Anthropic) agrees. Not only is he leading the company at the forefront, but he invests significant time and energy writing about the big societal considerations that will stem from AI advancement. I consider him one of the most informed and holistic thinkers when it comes to modern AI. In a recent podcast interview, he opens with the statement “the most surprising thing is the lack of public recognition of how close we are to the end of the exponential.”
Humans are notoriously terrible at intuitively understanding exponential growth - we tend to think linearly by default. But when something can extend and improve itself, with no barriers to slow it down, then the rate and magnitude of change becomes immense. When what has just happened to software is applied to other fields, we go from AI being able to solve simple problems to outperforming humans within days or weeks, not months or years.
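To make the linear-versus-exponential gap concrete, here is a toy calculation (the "capability score" and the doubling period are hypothetical illustrations, not forecasts from the article):

```python
# Toy comparison of linear vs exponential improvement.
# A hypothetical capability score either gains a fixed amount per
# period (how we intuitively extrapolate) or doubles each period
# (what compounding self-improvement looks like).

def linear(start: float, gain: float, periods: int) -> float:
    """Fixed gain per period."""
    return start + gain * periods

def exponential(start: float, periods: int) -> float:
    """Doubles every period."""
    return start * 2 ** periods

start = 1.0
print(linear(start, 1.0, 10))   # 11.0 after 10 periods
print(exponential(start, 10))   # 1024.0 after the same 10 periods
```

After ten periods the two trajectories differ by roughly two orders of magnitude, which is exactly the gap our linear intuition fails to anticipate.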
As overwhelming as the potential implications are, the question becomes... what do we do about it?
I don’t have any silver bullets to offer. The only thing I implore anyone who will listen to do is spend time peeling back the layers and rounding out your understanding, so that we can have truly meaningful discussions about the fast-approaching decisions on how to reshape business models and workforces.
The product and technology layer is the most visible, but it is almost a distraction. It evolves significantly on at least a monthly basis, if not weekly or daily. It is impossible to keep track of everything, though it is important to understand how the trend line is evolving.
The less visible and emerging layers are where we need to be having more meaningful conversations. This is where the wicked problems lurk - such as how we will deal with a rapidly and radically reshaped workforce and its societal impacts, how we will manage the huge demands on our energy infrastructure, and how we will navigate the geopolitical tensions and sovereign risks given that this house of cards currently depends on just a few American companies.
Timing is a key point. Each of us needs to arrive at an informed opinion about when these realities will be upon us, so that we can make the best proactive and reactive decisions, especially in the face of commercial pressures. Geoffrey Hinton, Nobel Prize winner and godfather of AI who has been prominently ringing the alarm bell, often quotes a 10-20 year timeframe for the really big shifts. Dario Amodei tends to talk in a much more immediate 1-5 year horizon. Of course, none of us knows. But what is close to certain is that most of us will be dealing with a profound shift due to AI within our working lives, and while we are in positions of significant responsibility. There is a high likelihood that this will start to unfold while you are in your current role with your current employer. This mustn't be a can that we continually kick down the road - we need to start preparing ourselves to make significant decisions.
Some practical suggestions to Australian business leaders:
Read Dario Amodei’s essays. Not because he is some oracle or guru, but because he takes the time to unpack the holistic picture and does his best to provide a balanced perspective. They are long reads - you’ll need to invest a couple of hours. But they go into the depth and nuance that we need in order to have the conversations we need to have. Start with Machines of Loving Grace (October 2024, a more optimistic view) and then read The Adolescence of Technology (January 2026, which delves into the risks).
Ensure that there are diverse perspectives in the room. The topic of AI often gets lumped into the “Information Technology (IT)” bucket, but in reality, it includes economics, business models, people development, government policy etc. While many technical leaders (CIOs, engineers etc) do have a broad and informed perspective, don’t just rely on the folks from the IT department to take care of AI. We all need to play our part, and not just at a superficial level.
Get hands-on. Vibe coding a working software application within 30 minutes is genuinely feasible for anyone reading this, though perhaps that feels like a stretch. Either way, it is unwise to limit your exposure to chatbots such as ChatGPT or to reading articles. Take the opportunity to attend showcases where teams in your organisation demonstrate how they use AI tools, or ask a colleague or family member to show you first-hand the power of the current tools. It’s all academic until you experience and truly appreciate what is already possible.
Make informed buying decisions. One of the only real levers available to any of us, as individuals or as decision makers in organisations, is where we choose to spend our money. There is a complex interplay between commercial, political, and policy forces in AI. Pay attention to how providers approach safety, governance, and use cases - these choices have downstream societal effects. If the companies who hold the line on safety and ethics can’t remain commercially viable, then that leaves us in a precarious place.
As tiresome and overwhelming as the never-ending AI narrative can sometimes feel, the story is really just beginning. My genuine hope is that we can collectively step up and educate ourselves on the nature of what we are dealing with so we can steward this very significant moment in history in a safe and humane way.