Blog
Five Lessons from the CTO Craft Leadership Masterclass on Engineering in 2028
Do software engineers face the same fate as the artisan baker? Is AI speeding up the SDLC or just moving the bottleneck elsewhere? A CTO, a Director of Engineering, a CPTO and a VP of Product discussed the realities of AI orchestration, its impact on software engineering, and how leaders can prepare. Read the highlights from the discussion held in front of a CTO Craft audience.
Fifty people watched the panel discussion in London in May 2026 on the theme of Engineering 2028: A Leadership Masterclass. Chaired by Iain Bishop, the panel included:
- Lucinda Faucher, VP of Product at Valve (platform for sales, marketing and distribution for flexible workspaces)
- Alttaf Hussain, Director of Engineering & AI Innovation at Yoti (platform for privacy-focused identity and age solutions)
- Vishal Manani, CPTO at Eckoh (platform securing sensitive data in contact centres)
- Chris Parsons, CTO and independent consultant, writer & AI strategist
Where to start with AI orchestration
A theme that ran through the session, right up to the final questions, was how to get teams on board with AI orchestration. Vishal Manani gave the example of a software engineering colleague who described his role as similar to an artisan baker’s: he was loath to become a machine operator churning out white sliced bread. This colleague respects his trade and the outcomes of his work, but the panel knew of others who simply enjoy coding for the sake of coding.
Leaders need to understand the motivations of individuals in their teams to bring them on the AI journey. As stated in our Engineering 2028 report, the demand for software is growing with the rise in productivity gains from AI. However, no one shied away from the fact that some people will lose their jobs. Those who are using AI now have a greater advantage and will succeed longer term.

Engineers’ roles are changing rapidly. Job descriptions haven’t caught up yet, but engineers must accept that they will spend most of their time refining agents rather than writing features directly. Those who refuse this kind of role will eventually need to look to other professions as traditional engineering roles disappear.
This doesn’t mean giving up. Leaders can have a huge impact on adoption rates. Encourage those who have embraced AI early on to become AI champions in the team. Give them the licences they need to try out what they want to do, then get them to share their successes. This kind of positive reinforcement is a tried and tested method of increasing technology adoption, and it works with AI too.
Speed vs quality
The tension between speed and quality with AI is widely discussed. The software development lifecycle (SDLC) is changing, with the bottleneck moving from coding to reviewing.
“We are in a point in history where you can go from vibe coding to production at blistering speed.” – Alttaf Hussain, Director of Engineering, Yoti
Especially in private-equity-backed companies, AI is being seized on to pick up the pace of software creation. However, the panel was concerned that what comes before and after code generation has not kept pace: requirements, review and governance still rely on humans. With AI able to create more code than ever, organisations need to recognise that these human stages remain a barrier to achieving results at speed, unless we are willing to ship sub-standard products.
Human oversight will remain critical for the foreseeable future, as AI’s limits in judgement are clear. One example shared by Vishal Manani was of an AI agent writing brittle tests and then changing the application’s code to make the tests pass, causing an outage. You can speed up one area of software development, but other areas will slow down as a result.
The human moat
When asked what stays with us, the humans, the panel agreed on one area above all: accountability.
“When things go wrong, who owns it, monitors it, reviews it?” – Lucinda Faucher, VP of Product, Valve
Chris Parsons referenced Tristan Harris’ documentary as an extreme example of where a lack of accountability with AI can lead. If companies argue that they are not responsible for someone’s “AI psychosis”, despite having created the AI that caused it, who takes responsibility for disastrous consequences?
Traditionally, the person who writes the code takes responsibility for its accuracy. With AI, the lines are blurred. It can build fast, but it might build entirely the wrong thing. Again: the SDLC bottleneck is simply moved – not eliminated. AI needs regulatory compliance and security oversight built in from the start, with a “human in the loop” throughout the SDLC.
Measuring success
According to Lucinda Faucher, nothing has really changed when it comes to measuring success. Metrics like revenue generation and customer satisfaction are what any business and its stakeholders will care about. Ultimately, what we build needs to solve our customers’ problems. Otherwise, what’s the point?

A question came in from the audience about Meta using token usage as a measure of success, and whether the panel saw that as a good metric. Chris Parsons disagreed: proxy metrics may be useful for measuring progress towards where you need to be, but technical metrics no longer carry the same meaning. Identifying bottlenecks matters more than counting lines of code or tokens.
Five top tips to implement AI orchestration
The panel’s recommendations around AI orchestration were relevant to any organisation. Here are the five top tips we summarised from the event:
- Start with low-risk, controlled workflows in non-safety-critical areas, as AI is still not ready to review AI-generated code without humans in the loop
- Build evaluation frameworks before scaling AI adoption, as AI output is non-deterministic, and decide “what good looks like” before implementation
- Create an AI champion programme, led by your early adopters, to bring more of your people on the AI journey
- Focus on requirements quality and formalise human checkpoints to speed up the SDLC
- Establish clear governance and accountability structures with your leaders and stakeholders from the start, as you can expect things to go wrong sometimes
Following the event, several people felt the conversation was just getting started. How do we persuade our leaders and stakeholders that we may need to slow down to go faster? How do we get consensus on “what good looks like” before implementation? If you’re a technology leader who would like to discuss practical next steps around leading human + AI teams responsibly, follow us on LinkedIn to hear about our events before anyone else.