At the IEEE conference last August, Harry Jones gave a provocative presentation about Fantasy Projects. He described these as visionary, exploratory endeavors that push the boundaries of what’s possible, that is, the kind that expands humanity’s collective knowledge even when their outcomes are uncertain. NASA’s Apollo missions, the Hubble Space Telescope, and the Constellation program were his prime examples. These projects were ambitious to the point of audacity, demanding budgets, schedules, and technical plans that treated the impossible as merely difficult.
Today, many artificial intelligence (AI) projects claim ambitious aims but lack the grounding and unified purpose of past scientific endeavors. They balance uneasily between aspiration and fantasy, promising societal benefit while serving commercial interests, yet often fall short of meaningful accountability.
AI’s Fantasy Framework

In our research paper, *Simultaneous Pursuit of Accountability for Regulatory Compliance, Financial Benefits, and Societal Impacts in Artificial Intelligence Projects*, we identify how AI projects attempt to be everything at once (ethical, profitable, and socially responsible) yet rarely achieve meaningful balance. We found that accountability within AI development shifts dramatically depending on who’s involved and on the scope of the system. Regulatory compliance, financial goals, and societal benefits coexist, but only in theory. In practice, most project roles hold less than half of the responsibilities they should.
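To make that finding concrete, consider a purely illustrative sketch. The roles, accountability areas, and assignments below are invented for this post, not data or code from the paper; the point is the kind of role-by-responsibility bookkeeping that exposes when a role holds less than half of what it should:

```python
# Illustrative only: a toy accountability matrix for an AI project.
from dataclasses import dataclass, field

# Hypothetical accountability areas, mirroring the three goals discussed above.
EXPECTED_AREAS = {"regulatory compliance", "financial benefits", "societal impacts"}

@dataclass
class Role:
    name: str
    held: set = field(default_factory=set)  # areas this role is actually accountable for

    def coverage(self) -> float:
        """Fraction of the expected accountability areas this role holds."""
        return len(self.held & EXPECTED_AREAS) / len(EXPECTED_AREAS)

# Invented assignments, for illustration.
roles = [
    Role("project owner", {"financial benefits"}),
    Role("developer", {"regulatory compliance"}),
    Role("compliance officer", {"regulatory compliance", "societal impacts"}),
]

for role in roles:
    gap = role.coverage() < 0.5  # the "less than half" pattern described above
    print(f"{role.name}: coverage={role.coverage():.0%}, accountability gap={gap}")
```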
AI teams plan as though costs, schedules, and outcomes are deterministic when, in practice, they are aspirational. They treat the creation of intelligent systems believed capable of perception, experience, and reasoning as if it were a product release rather than a philosophical experiment. This “project planning delusion” mirrors what Jones described: an assumption that technical progress follows linear project management principles, when in fact it’s driven by leaps of imagination, bursts of discovery, and sometimes pure luck.
The Mirage of Accountability
The findings show that while AI projects claim to value fairness, transparency, and sustainability, few assign clear responsibility for these goals. There is a deep accountability gap.
In true fantasy fashion, teams imagine they can simultaneously maximize profits, protect privacy, and minimize environmental impact. Yet our research reveals that project owners prioritize financial benefits over ethical practices and energy costs. Developers, on the other hand, often assume that someone else, anyone else, will ensure ethical compliance.
This diffusion of accountability is the AI industry’s equivalent of believing the rocket will steer itself.
When Vision Outpaces Reality
The Apollo program’s fantasy succeeded because its delusion was disciplined. Every phase was backed by rigorous engineering, independent verification, and national commitment. The current wave of AI innovation lacks that structure. Instead, it thrives on hype cycles, speculative investment, and the illusion that “responsible AI” can be achieved through a few lines of policy language.
Regulatory compliance, financial benefits, and societal impacts are not mutually exclusive project goals; they can coexist, but coexistence isn’t harmony. Like NASA’s moonshot projects, AI needs bold imagination. But unlike NASA, it lacks an agreed moral compass or a long-term mission.
Escaping the Delusion
AI’s fantasy phase could still become its enlightenment era if organizations stop treating accountability as decoration and start embedding it in their technical DNA. That means:
- Defining who is actually responsible for ethical outcomes.
- Recognizing that sustainability and fairness are not marketing claims but engineering constraints (a sketch of what that could look like follows this list).
- Accepting that some AI goals may simply be unachievable and that this, too, is part of honest science.
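As one way of making the first two points concrete, here is a minimal policy-as-code sketch. Every owner, threshold, and metric in it is hypothetical, invented for illustration rather than drawn from our paper; the pattern is what matters: each ethical outcome gets a named accountable owner and a machine-checkable budget, and a release is blocked when either is missing.

```python
# Minimal policy-as-code sketch. All names, owners, and thresholds are
# hypothetical; the point is the pattern, not the specific numbers.

REQUIREMENTS = {
    # Each ethical outcome has a named accountable owner and a hard budget.
    "fairness": {"owner": "ml-lead@example.com", "max_disparity": 0.05},
    "energy": {"owner": "infra-lead@example.com", "max_kwh_per_run": 120.0},
}

def check_release(metrics: dict) -> list[str]:
    """Return blocking violations; an unowned requirement is itself a violation."""
    violations = []
    for name, req in REQUIREMENTS.items():
        if not req.get("owner"):
            violations.append(f"{name}: no accountable owner assigned")
    if metrics["disparity"] > REQUIREMENTS["fairness"]["max_disparity"]:
        violations.append(f"fairness: disparity {metrics['disparity']} exceeds budget")
    if metrics["kwh_per_run"] > REQUIREMENTS["energy"]["max_kwh_per_run"]:
        violations.append(f"energy: {metrics['kwh_per_run']} kWh exceeds budget")
    return violations

# Example gate, e.g. run in CI before a model release.
violations = check_release({"disparity": 0.08, "kwh_per_run": 95.0})
if violations:
    raise SystemExit("release blocked:\n" + "\n".join(violations))
```

A gate like this settles none of the deeper questions, but it turns “responsible AI” from a few lines of policy language into something a build can actually fail on.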
In the end, fantasy projects like Apollo dared to dream but never forgot gravity. AI, if it wishes to mature, must learn the same lesson. Otherwise, its story will be one of delusion, not discovery.
Our research was released as open access on October 23, 2025. You can view it here.