AI is reshaping software engineering, but the changes are more complex than a simple narrative of disruption or enablement. Drawing on interviews with leading CTOs and data from LinkedIn and the Tech Council of Australia, the report presents a nuanced view of AI’s current and future impact on engineering talent, culture, and capability.
Key Themes:
1. The Current Landscape
- Australia's AI workforce grew from 800 in 2014 to over 33,000 in 2023.
- Despite concerns, AI has not yet reduced engineering jobs—most titles still show positive growth.
- Roles specific to AI (e.g., Generative AI Engineers) are increasing rapidly, while more generalist roles (e.g., Full Stack Engineers) show stagnation or decline.
2. CTO Sentiment: Cautious Optimism
- Most CTOs agree that AI is a tool, not a replacement—currently helpful for scaffolding code, but not yet reliable in production.
- The pace of AI advancement is widely underestimated, posing a risk of obsolescence for companies and individuals that fail to adapt.
3. The ‘70% Problem’
- AI often delivers ~70% accurate code, but struggles with the remaining 30%—contextual judgement, debugging, and domain-specific knowledge remain human strengths.
- There’s concern that junior engineers may over-rely on AI and fail to build foundational skills.
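The ‘70% problem’ can be made concrete with a small hypothetical example (the function and the specific bug are invented for this summary, not drawn from the report). A plausible AI-generated draft of a median routine handles the common case but misses edge cases; the commented fixes are the kind of ‘remaining 30%’ work that still falls to engineers:

```python
def median(values):
    """Return the median of a non-empty list of numbers.

    An AI draft typically nails the odd-length 'happy path' but can
    silently mishandle boundaries; the comments mark human fixes.
    """
    if not values:
        # Human-added guard: the draft crashed with an unhelpful IndexError here.
        raise ValueError("median() requires at least one value")
    ordered = sorted(values)  # Human fix: the draft assumed pre-sorted input.
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]  # Odd length: the easy 70%.
    # Even length: the draft returned ordered[mid]; the correct median
    # averages the two middle elements.
    return (ordered[mid - 1] + ordered[mid]) / 2
```

The point is not this particular bug but the pattern: generated code tends to be plausible on the happy path, while input validation, boundary conditions, and domain rules still require human review.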
4. The Knowledge Challenge
- As AI increasingly abstracts away complexity, domain-specific knowledge is at risk of erosion.
- Historical economic theory (e.g., Hayek’s Knowledge Problem) is invoked to argue that AI, like central planning, may mismanage distributed knowledge if over-trusted.
5. Conformity & Cognitive Offloading
- Engineers, especially junior staff, may follow AI-generated solutions uncritically, eroding critical thinking and creativity.
- The Asch conformity experiments are cited as a warning against groupthink when engineering decisions are guided by AI.
6. Recruitment & Training Implications
- CTOs suggest hiring creative, commercially minded engineers who understand the business, not just candidates with strong technical skills.
- AI enables broader participation (e.g., self-taught “normies”), but companies must still invest in foundational training to maintain quality and innovation.
7. Risks of AI-Induced Stagnation
- Recursive model training could lead to “model collapse”, where generative systems regress by learning only from prior AI outputs.
- If not managed properly, AI may encode biases, limit heterodox thinking, and undermine innovation.
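The ‘model collapse’ risk above can be illustrated with a toy simulation (the setup and numbers are illustrative assumptions, not from the report): each ‘generation’ fits a simple Gaussian model to samples drawn only from the previous generation’s fitted model, and the learned distribution steadily loses diversity.

```python
import random
import statistics

def simulate_collapse(generations=500, sample_size=10, seed=0):
    """Toy model-collapse demo.

    Generation 0 is the 'real data' distribution. Each later generation
    is fitted only to a finite sample of the previous model's output, so
    estimation noise compounds and the learned spread drifts towards
    zero: the distribution narrows until diversity is effectively gone.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the real data distribution
    history = [sigma]
    for _ in range(generations):
        synthetic = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(synthetic)       # refit on synthetic data only
        sigma = statistics.stdev(synthetic)    # sample spread shrinks over time
        history.append(sigma)
    return history

spreads = simulate_collapse()
print(f"initial spread: {spreads[0]:.4f}, final spread: {spreads[-1]:.4f}")
```

The analogue for generative systems is training on scraped data that is increasingly AI-generated: each round inherits and amplifies the previous round’s sampling artefacts rather than the diversity of the original data.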
Recommendations for CTOs & Tech Leaders:
- Do not abandon training juniors. Basic software skills remain critical, especially for debugging AI-generated outputs.
- Use AI as augmentation, not substitution. Treat AI as a ‘copilot’—valuable, but still needing human oversight.
- Hire for creativity and business awareness. The most valuable engineers will be those who combine domain expertise with strategic thinking.
- Develop AI policies. Most teams lack clarity on responsible AI usage—urgent governance is needed.
- Prepare for long-term effects. Avoid “presentism”—think 10–20 years ahead to anticipate structural shifts in team capability and knowledge distribution.
AI is transforming software engineering, but its adoption must be strategic and human-centric. The next era of technology will belong to organisations that embrace AI while continuing to nurture talent, preserve institutional knowledge, and think independently.