Artificial intelligence is rapidly changing how projects are planned, monitored, and delivered. Tools powered by machine learning can now forecast delays, model supply disruptions, and flag budget risks long before they appear on traditional dashboards. For many organizations, these systems promise a future where uncertainty is managed by algorithms rather than people.
A growing body of research suggests that this expectation may be misplaced.
Recent studies by project management researcher Samuel Yaw Larbi examine how artificial intelligence is being integrated into project environments, particularly in complex systems such as supply chains, infrastructure delivery, and large organizational transformations. His work does not question the usefulness of AI. Instead, it asks a more difficult question: what happens when decision‑making authority quietly shifts from humans to machines?
Across multiple peer‑reviewed publications, Larbi’s research highlights a consistent pattern. AI tools are increasingly used to recommend schedules, rank risks, and optimize resources, yet the assumptions behind those recommendations are often poorly understood by the teams using them.
In practice, AI systems learn from historical data. When that data reflects outdated conditions, incomplete records, or embedded bias, the system can reproduce those weaknesses at scale. According to Larbi’s analysis, the danger is not technical failure but misplaced trust.
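This failure mode can be illustrated with a minimal, hypothetical sketch (the data and scenario are invented, not drawn from Larbi's studies): a forecaster that simply averages historical records will faithfully reproduce whatever conditions those records encode, even after reality has moved on.

```python
from statistics import mean

# Hypothetical lead times (in days) recorded under a supplier contract
# that has since been replaced. The data is not wrong -- just outdated.
historical_lead_times = [21, 24, 19, 26, 23]

def forecast_lead_time(history):
    """Naive forecast: assume the future looks like the recorded past."""
    return round(mean(history), 1)

# The model confidently projects ~22.6 days because that is what the
# past looked like -- even if the new supplier now ships in 10.
print(forecast_lead_time(historical_lead_times))  # 22.6
```

Nothing here is broken in a technical sense; the system does exactly what it was trained to do. That is Larbi's point about misplaced trust: the risk lives in the data and the assumptions, not in the code.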
Rather than treating AI as a neutral decision‑maker, his work frames it as a powerful but limited assistant. Algorithms can identify patterns faster than humans, but they cannot interpret social consequences, ethical trade‑offs, or political realities that surround many projects.
One of the central findings in his research concerns accountability. When AI‑generated recommendations influence major project decisions, responsibility can become blurred. Project managers may feel pressured to follow algorithmic outputs even when their professional judgment raises concerns.
Larbi argues that this shift creates new risks, especially in public and high‑impact projects where decisions affect communities, workers, or public funds.
“AI systems are very good at telling you what is likely to happen based on the past. What they cannot do is take responsibility for the outcome. Accountability does not belong to an algorithm. It belongs to the people who choose whether or not to act on its advice.”
— Samuel Yaw Larbi
His research emphasizes the importance of what he describes as human‑in‑the‑loop project management. In this approach, AI provides forecasts and scenarios, but humans retain authority over final decisions. Rather than replacing professional judgment, technology strengthens it.
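The human-in-the-loop pattern can be sketched in a few lines of code. This is an illustrative mock-up, not an implementation from Larbi's research: the `Recommendation` type and `decide` function are invented names, and a real system would route approvals through project governance rather than a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g. "compress schedule by two weeks"
    confidence: float  # the model's own uncertainty estimate
    rationale: str     # the data and assumptions behind the output

def decide(rec: Recommendation, manager_approves: bool) -> str:
    """The algorithm proposes; a named human disposes.

    The recommendation is never executed automatically -- final
    authority, and therefore accountability, stays with the person.
    """
    if manager_approves:
        return f"APPROVED: {rec.action}"
    return f"OVERRIDDEN: {rec.action}"

rec = Recommendation(
    action="re-sequence the critical path",
    confidence=0.83,
    rationale="trained on 2019-2023 delivery records",
)
print(decide(rec, manager_approves=False))  # OVERRIDDEN: re-sequence the critical path
```

The design choice worth noting is that the human decision is a required input, not an optional override: the code cannot act on a recommendation without it.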
Transparency is another recurring theme in Larbi’s work. He stresses that project teams must be able to explain how an AI system reached a conclusion, what data it relied on, and where uncertainty exists. Without that clarity, trust erodes quickly among stakeholders.
As AI becomes more embedded in project tools, Larbi’s findings suggest that the role of the project manager is evolving rather than disappearing. Traditional skills such as scheduling and reporting remain important, but new competencies are emerging. Project leaders must now understand data quality, recognize algorithmic limitations, and communicate AI‑supported decisions clearly to non‑technical audiences.
This shift places greater emphasis on human judgment, not less.
“The future of project management is not about humans competing with machines. It is about people making better decisions because intelligent tools are supporting their judgment, not replacing it.”
— Samuel Yaw Larbi
For organizations eager to automate project decisions, Larbi’s work offers a cautionary message. AI can improve efficiency and foresight, but only when guided by professionals who understand both its strengths and its limits.
In complex projects, judgment, responsibility, and ethical awareness remain human work. AI may analyze the data, but it is still people who must decide what to do with it.