What might the intersection of neuroergonomics, Generative AI, and Data Science look like?
Over the past twenty years, early, tentative work at the intersection of neuroscience, cognitive science, engineering and human factors has borne fruit, and a field known as neuroergonomics has emerged. Neuroergonomics was initially proposed by Parasuraman (1998), and subsequently formalized in Parasuraman & Rizzo (2006). Together, they proposed the examination of the brain mechanisms underlying human–technology interaction in increasingly naturalistic settings, the latter being representative of work and everyday-life situations (Ayaz & Dehais, 2018). The discipline has therefore been defined as the “scientific study of the brain mechanisms and psychological and physical functions of humans in relation to technology, work and environments”, with a view to better understanding the relationships between neuroscience and perceptual, cognitive and motor functioning.
The work of our team on Learning at the intersection of AI, physiology, EEG, our environment and well-being (the Life2Well Project) represents our vision of the relevance of neuroergonomics to teaching and learning.
This year, we started thinking about how learners might use the Internet of Things (IoT), Data Science and Generative AI to understand their own emotional states better with a view to developing empathy and environmental stewardship.
We will share our ideas in November 2023, when I am greatly privileged to be invited as a panelist at Empowering Minds: A Round Table on Generative AI and Education in Asia-Pacific. My hosts are the UNESCO Multisectoral Regional Office in Bangkok (UNESCO Bangkok), in collaboration with the Southeast Asian Ministers of Education Organization (SEAMEO).
The way we conceptualise this intersection affords learners a high degree of agency and, consistent with our team's focus on learner intuition, affords authenticity and embodiment through the use of photographs taken by the learners themselves in their local environment, drawn from their lived experience.
Learners exercise their intuition to decide which parts of the source image are likely to be affected by a phenomenon of their choice. Learners will differ in which parts of the scene they expect to be affected, but all are likely to be surprised, impressed, or even distressed by the depiction of the post-phenomenon scene. A sketch of this workflow appears below.
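To make the workflow concrete, here is a minimal Python sketch using the Hugging Face diffusers inpainting pipeline. The checkpoint, file names, and prompt are illustrative assumptions rather than the exact tooling of our project:

```python
# A minimal sketch of the learner workflow, assuming the Hugging Face
# diffusers library and an off-the-shelf Stable Diffusion inpainting
# checkpoint (illustrative choices, not our project's exact stack).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The learner's own photograph of a familiar local scene, plus a
# hand-drawn mask (white = regions the learner predicts the phenomenon
# will affect). File names here are hypothetical.
source = Image.open("my_local_scene.jpg").convert("RGB").resize((512, 512))
mask = Image.open("learner_mask.png").convert("L").resize((512, 512))

# The phenomenon is chosen by the learner, e.g. coastal flooding.
prompt = "the same street after severe coastal flooding, photorealistic"

result = pipe(prompt=prompt, image=source, mask_image=mask).images[0]
result.save("post_phenomenon_scene.png")
```

Because only the masked regions are repainted, the generated scene stays anchored in the learner's own photograph, which is what gives the stimulus its authenticity.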
Seeing for themselves, and learning about, their own physiological and neurological responses can be a learning goal in itself; the sketch below illustrates one simple way such a response might be quantified.
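As a hedged illustration, the following Python sketch summarises a physiological trace before and after viewing the generated scene. The sampling rate, file names, and choice of the alpha band are assumptions for illustration, not our exact pipeline:

```python
# A sketch of one way learners might quantify their own response,
# assuming a single-channel EEG trace sampled at 256 Hz (assumed rate;
# file names and band choice are likewise illustrative).
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumption)

def band_power(signal, fs, low, high):
    """Power in the [low, high] Hz band, via Welch's PSD estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[band], freqs[band])

# Compare alpha-band (8-12 Hz) power before and after viewing the
# AI-generated post-phenomenon scene.
baseline = np.load("eeg_before_stimulus.npy")
response = np.load("eeg_after_stimulus.npy")

print("alpha power before:", band_power(baseline, FS, 8, 12))
print("alpha power after: ", band_power(response, FS, 8, 12))
```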
Just as significant is the potential for such a pedagogical approach to evoke empathy and prosocial values, which ordinary images cannot achieve:
- stock images are not authentic to the learner's lived experience;
- non-stock images are not predictive: an existing photograph cannot depict the hypothetical future scene.
Our results are presently available as a pre-print, and we invite you to browse our manuscript :-)
Our approach has the following advantages:
- it is applicable to a wide range of phenomena (e.g., social cohesion / social unrest, climate change, urban growth, local environmental hazards);
- its advantages over AR / VR include affordability, scalability, robustness, and learner agency rather than didactic visualisation.
In summary, through our approach, learners can use the Internet of Things, Data Science and Generative AI:
- to understand their own emotional states better;
- with a view to developing empathy and environmental stewardship.
Our approach uses Generative AI as a tool to create an emotionally strong stimulus of a hypothetical scenario (for which no stock image exists), of direct relevance to the learner, in order to:
- provoke an emotional response;
- enable students to learn about their own physiological and neurological responses; and
- evoke empathy and prosocial values, for which ordinary images would be less effective.