
PIMA Bulletin article, January 2026, by Cameron Richards

What if the naïve enthusiasts of AI are wrong and it’s really ‘just a technological tool’? The implications of an ‘optimal lifelong learning’ perspective

OVERVIEW:

Many people worry about the naively enthusiastic projections of an ‘AI revolution’ – and how these might further demoralize global education and society. Others see valid uses of AI as another ‘technological tool’. What is needed is the framework of a balanced perspective that might provide an ‘antidote’ to the threats of future misuse.

Keywords:

AI as technological tool; future knowledge; human lifecycle learning; optimal lifelong learning from experience, reflection and ‘inner wisdom’

Many people are rightly concerned about the potential misuse of technology – especially in an increasingly uncertain and crisis-laden world. Much of this anxiety focuses on the growing number of naively enthusiastic claims about ‘new AI’ (i.e. generative AI programs, applications, and related technologies, including ‘large language models’) and how such developments might accentuate the current global demoralization of education and society. These tools are often advertised as offering shortcuts to innovation, wealth, and ‘productive outputs’. Some AI programs, for instance, will produce artistic imagery for quick profit, and some are now even marketed as capable of writing a PhD dissertation for you ‘in a week’. Yet others recognize both the legitimate uses of AI and the challenges associated with them. In many ways these technologies – once simply modes of ‘machine learning’ – are just another set of tools: ultimately extensions of the human mind-body, like the primitive club or axe, the printing press, or the modern steam engine.

What is needed, however, is a broader framework that can help balance and correct the extensive list of threats and challenges linked to the possible future misuse of AI. This short piece reflects on some relevant implications of the idea, posed in our related recent book, that a human lifecycle model of learning offers a possible antidote to such fears [1].

We begin with the recognition that, despite the frequent ‘personification’ of AI as an impending menace or an inevitable agent of human downfall (some now even project a resulting ‘knowledge collapse’ because of AI’s dependency on ‘selective sources’; Peterson, 2025), generative AI remains essentially a technological tool – in some ways the ultimate technological tool. The concept of artificial intelligence stems in part from Alan Turing’s influential ambiguity about the term intelligence when applied also to ‘machine learning’: a machine could be considered intelligent (and able to ‘think’), he suggested, if humans were not able to tell the difference between the words or actions of a human and those of a computer.

Many contemporary generative AI applications – from digital assistants to driverless cars to on-demand image generators ready for immediate commercialisation – now appear to meet this standard. But they do so only when evaluated from a superficial perspective by individuals apparently unaware of the extraordinary self-organising (and self-directed learning) capacities of the human mind-body organism – including the unique ability to create, in the first place, the algorithms that make the functions of AI possible. However, those who possess a deeper understanding of human learning and knowledge-making (including the innate ingenuity accessible in principle to all people) – and who can ‘dialogically engage’ such systems – will ultimately be able to tell the difference one way or another. That is why an effective viva (defence) stage will surely caution those wanting to use AI to help them ‘write a PhD in a week’ and the like [2].

Recognizing this difference also requires reclaiming terms like ‘deep learning’, which have been appropriated and redefined by computer scientists and others of that ilk (e.g. www.educative.io/courses/ai-fundamentals/introduction-to-artificial-intelligence), much as notions of ‘knowledge’ and ‘wisdom’ were reduced to mere data (rather than human experience) within the ‘data–information–knowledge–wisdom (DIKW) pyramid’ of information systems theory. In this piece we therefore connect our argument to themes from our recent book, beginning with the constructivist, learner-centred concept of ‘deep learning’, which long predates and far surpasses the selective AI-related use of the term. As scholars such as Marton and Säljö (1976) have shown, a clear distinction can be made between ‘surface learning’ (focused on content acquisition or skill reproduction in formal education – and on related views of lifelong learning as mere accumulation or ad hoc imitation) and ‘deep learning’, which is inherently transformative and grounded in experience, reflection, and the problem-solving of everyday life.

This deeper mode of human understanding and knowledge-building is, we submit, rooted in the innate blueprint of the human lifecycle (as a self-organising ‘natural system’ capable of reflection on direct experience) and its four key stages of lifelong learning and ageing. This is recapitulated by all humans, though never by machines, and always through and within particular cultural and social contexts. Our model adapts Erik Erikson’s (1998) later recognition that his stages of lifelong development—trust in childhood, identity in youth and early adulthood, and integrity in mid-life—culminate in a final confrontation with the ‘death, not just mortality’ crisis.

In contrast to this emergent constructivist notion of ‘human deep learning’, the AI-related definition of deep learning refers only to an advanced subset of machine learning, typically defined as the use of multi-layered artificial neural networks to detect patterns in vast quantities of data. While no doubt impressive as a basis for generative AI technologies, this remains analogous to ‘human surface learning’ – ultimately superficial, descriptive and imitative as a mode of learning for development or of knowledge production.
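For readers curious about what ‘deep learning’ amounts to in this machine-learning sense, the minimal sketch below (a purely generic illustration in Python, not drawn from the article or any cited source, with invented toy data and an arbitrarily small network) fits a multi-layered network to noisy data by gradient descent – statistical imitation of a pattern, rather than reflection on lived experience.

```python
# Generic, illustrative sketch only: a tiny multi-layered neural network
# fitted to noisy toy data by gradient descent. It 'learns' only in the
# sense of imitating a statistical pattern present in the data.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine-wave pattern the network will try to reproduce.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.normal(size=(200, 1))

# Two layers of weights (1 -> 16 -> 1) with a tanh activation in between.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros((1, 1))

learning_rate = 0.1
for step in range(5000):
    # Forward pass: layered transformations of the input.
    hidden = np.tanh(x @ W1 + b1)
    prediction = hidden @ W2 + b2
    error = prediction - y  # gap between the imitation and the data
    # Backward pass: nudge the weights to shrink the mean squared error.
    dW2 = hidden.T @ error / len(x)
    db2 = error.mean(axis=0, keepdims=True)
    d_hidden = (error @ W2.T) * (1 - hidden ** 2)
    dW1 = x.T @ d_hidden / len(x)
    db1 = d_hidden.mean(axis=0, keepdims=True)
    W1 -= learning_rate * dW1; b1 -= learning_rate * db1
    W2 -= learning_rate * dW2; b2 -= learning_rate * db2

print("final mean squared error:", float((error ** 2).mean()))
```

However sophisticated the scaled-up versions of this procedure become, the underlying operation remains the adjustment of numerical weights to reproduce patterns in data – which is the point of the contrast drawn above.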

The same limitations apply to the DIKW pyramid, introduced in Lucky’s 1989 book appropriately titled Silicon Dreams. This information-systems model is often misused to imply universal categories of knowledge and wisdom, even though ‘wisdom’ is typically reduced within it to a metaphor of merely instrumental efficiency. Just as AI – derived from ‘machine learning’ – cannot meaningfully replicate human trust, creativity, or integrity, so too the DIKW framework, stripped of context and direct experience, pales in comparison to actual transformative human knowledge-building. The latter involves a distinct capacity for innovative problem-solving, experiential insight, and traditions of knowledge and wisdom cultivated through human lifecycle learning.

SUMMARY:

As our two related books go on to discuss, the most profound mode of human knowledge-building grows from the humility of ‘wise ignorance’ – a recovered application of Socrates’ ‘elenchus method’. This in turn aligns with the great humanist philosopher Paul Ricoeur’s related, Socrates-inspired distinction (e.g. Ricoeur, 1976) between naïve and dialogical ‘critical thinking’ (and such related concepts as ‘explanation versus deep understanding’). These distinctions further correspond to the constructivist distinction between surface and deep learning. As such, this framework – together with the idea of human lifecycle learning – offers a powerful antidote to uncritical or naively enthusiastic misuses of AI (and of the DIKW pyramid, etc.), while acknowledging AI’s legitimate value as ‘a technological tool’ able to assist with certain activities.

NOTES:

  1. The thoughts here mainly relate to my recently completed book The Four Stages of the Human Lifecycle Revisited: Optimal lifelong learning from experience, reflection and ‘inner wisdom’. It will initially be self-published on 1 January 2026 on amazon.com to ensure that it is immediately and directly available at near cost price to anyone interested [free advance copies are available for possible review if you email me directly].
  2. This piece also anticipates a section of another book I am currently completing, titled Words, Ideas, and Optimal Knowledge-Building: A ‘foolproof’ self-help guide to academic (and all other) thinking, writing, and problem-solving inquiry, which should be published in mid-2026. In that book I point out that the methods which can be used to ‘optimize’ the linked processes of academic writing and research – above all the pivotal importance of a central focus problem/question to organize the overall design of the main parts and key stages of the process, including better linking of what should be the related concepts of the ‘literature review’ and ‘methodology/data analysis’ sections – are typically what is lacking in the outputs produced by the AI apps or programs making such promises as ‘a written PhD for you within a week’. Such AI-produced dissertations or papers may superficially impress (or at least their literature review sections might). But they inevitably lack both the ‘inner integrity’ (or overall ‘cohesion, coherence and relevance’ in context) and the ‘authorial integrity’ of a work claiming to demonstrate ‘some original contribution to human knowledge’.

REFERENCES:

  • Erikson, E. (1998). The Life Cycle Completed (Extended Version). New York: W.W. Norton.
  • Lucky, R. W. (1989). Silicon Dreams: Information, man, and machine. New York: St. Martin’s Press.
  • Marton, F. & Säljö, R. (1976). On qualitative differences in learning: I – Outcome and process. British Journal of Educational Psychology, 46(1), 4–11.
  • Peterson, A. (2025). AI and the problem of knowledge collapse. AI & Society, 40, 3249–3269.
  • Ricoeur, P. (1976). Interpretation Theory: Discourse and the surplus of meaning. Fort Worth: Texas Christian University Press.
