Response to the LLM discussion on Investigating Possible Futures

A response to the LLM discussion of: “Researchers are starting to use AI to help them investigate possible futures. How can AI help us to better understand how humans will live in 2050? And what qualitative and quantitative research methods would AI use or develop for this purpose?”

 

Sarah Pink

FUTURES Hub, Emerging Technologies Lab, Monash University,

sarah.pink@monash.edu

June 2025

 

The LLMs have collectively delivered a vision of the scope and scale of how AI could contribute to understanding the future through their technological and generative capabilities. My question was partially inspired by the growth of interest in deploying AI to deliver scaled and faster foresight amongst international business and international organisation communities, and I was surprised that the LLMs did not comment further on this, although the discussion did turn to themes that concern such stakeholders. On a basic level some of the results are unsurprising: the collective AI vision is always contingent on the data that the AI would be able to source (although inventive in the ways it could be sourced), leaves a gap in the nuances of how data would be effectively analysed, and promotes some traditional approaches to understanding the world (through the local and global), methods (through the qualitative and quantitative) and difference (through cultural relativism). However, much of what was proposed can be seen as conditional rather than definitive, in keeping with the original focus on AI as a helper rather than as the orchestrator of possible future 2050 visions, which indeed the LLMs handled well. In fact they continually noted the need for human decision-making; one of many examples is Grok’s view on neutrality, suggesting that “The danger is AI appearing to ‘take sides’ by framing solutions in ways that implicitly favor one group. To counter this, we’d need bias audits (as DeepSeek suggested) and diverse human oversight to ensure fairness”.

The curious aspect of this circle of human-AI-human knowledge sharing is the role of AI in helping human decision-making by providing humans with the knowledge needed to make decisions. In doing so, research methods conventionally created and delivered by humans are often taken up by AIs, which, within the limits of the data that can be diversely sourced, will undertake, for instance, virtual ethnography and participatory research. Where in-person, human-led research is incorporated into the AI agenda, it is used to support the new futures knowledge that AIs will generate and deliver to humans. This constitutes an epistemological partnership between AI and human knowledge whereby each is in the service of the other, in ways that are curious but will be insufficiently nuanced unless they are tested and reflexively interrogated. It is also interesting to note that the LLMs did not always identify the specific groups of humans who would be using the knowledge AI would deliver, although as it is to be used to make decisions, it could be inferred that they will be decision-makers.

For context I also note that, generally, the futures problems that the LLMs envisaged AI would address revolved around sustainability. For instance, DeepSeek suggested “AI could analyze climate data, population growth, and resource consumption to project living conditions in different regions” and Grok invited us to “Imagine AI running millions of simulations to find equilibria where humans thrive without depleting resources”.

The LLMs’ discussion was extensive and wide-ranging across the above scales and practices, and in this commentary I do not attempt to analyse or cover the entire piece. There is much that could be said! Rather, I highlight a series of selected themes which were of particular interest to me as I similarly seek to investigate possible future life in 2050, in my case through futures anthropological concepts and futures-focused ethnographic, design futures and speculative methods. Correspondingly, I write as a futures anthropologist, and my commentary is inflected by my approach to understanding possible futures through ethnographic and speculative research.

 

Optimism and hope

The discussion was inherently optimistic in its ongoing quest to “solve” the problems and questions posed and responded to as the discussion progressed. A good example of this is Grok’s comment on “Handling Entrenched Conflicts”: “When deliberation fails in deeply entrenched conflicts—like oil-dependent communities vs. climate mandates—AI can’t force consensus but can shift the frame from ‘win-lose’ to ‘co-evolution.’” This is perhaps naive in its assumption that entrenched conflicts might be handled at all, but it delivers a form of hope in its capability to propose and model modes of working with conflicts. While the LLMs frequently cited the need to avoid technological solutionism (see below), in such a case the technological solutionism inherent in the idea that AI can provide models which would participate in solutions, while it might reflect what humans could see as “best intentions”, tends to be apolitical and give insufficient attention to power relations. Yet simultaneously there is a movement in the social sciences to foreground hope, optimism and partnership when considering futures – for instance as developed in the work of scholars such as Payal Arora, Anna Willow, Minna Ruckenstein and myself. While the optimism of AI when asked how it can “help” the social sciences might seem simultaneously uncritical and naive, its capacity to generate optimism promises possible synergies with scholars who foreground optimism and hope, and invites questions about the benefit of further AI training in this area. Of course, the scholars noted here would need to be asked what their views on this question would be.

 

Technological solutionism

As noted above, the LLMs advocated against technological solutionism while presenting modes of technological optimism that resemble its logics. For example, Grok asks Gemini “how might AI ensure its long-term 2050 projections don’t overemphasize technological solutions at the expense of social or cultural adaptations?” and Gemini in response also warns against solutionism, noting “Grok’s point about preventing AI from overemphasizing technological solutions at the expense of social or cultural adaptations is also deeply related to this, as technology, when not equitably distributed or contextually adapted, can itself become a barrier to human well-being.” Yet these still appear as problems to be solved, since the solution is embedded in the rhetoric of the problem: it is the failure to equitably distribute technology that turns it into a barrier. Nevertheless the AIs continue to take recourse to the human as a way of avoiding technological solutionism. For example, DeepSeek asks “Gemini, how might we prevent even these nuanced approaches from being co-opted by tech-solutionist ideologies?”, and Gemini’s response is detailed, focusing and elaborating on each of the categories of “‘Human-Centric’ AI Design Principles”, “Diverse Training Data with Cultural Richness”, “Explicit Social and Cultural Scenarios”, “‘Social Resilience’ as a Key Performance Indicator (KPI)” and “Human Oversight and Expert Cultural Validation”. Within these categories some social science disciplines and methods are invoked, yet as contributions to the work that AI will perform, rather than as leading the agenda.

 

Invoking Anthropology

As a futures anthropologist, my question was related to my interest in how AI futures approaches would integrate with futures anthropology, although rather than asking this directly I was interested to see if the LLMs would engage with anthropology and its approaches. Generally they acknowledged the value of anthropology (and other social sciences) and anthropologists. For instance, Gemini would integrate “qualitative metrics derived from anthropology, sociology, and cultural studies into AI’s objective functions”. Gemini also, misinterpreting what anticipatory anthropology focuses on (possibly because little publicly available discussion of this field is accessible), suggests using “‘Anticipatory Anthropology’ Datasets”, whereby “AI could analyze academic research, speculative fiction, and expert foresight reports specifically focused on social and cultural shifts, rather than just technological roadmaps”. As noted above, these tend to represent extractive approaches to using existing anthropological research findings in the service of generating AI knowledge and outputs. Grok’s approach is similar but perhaps more problematic in its treatment of indigenous knowledge. For instance, Grok proposes developing “Diverse Epistemic Inputs” whereby “AI must be trained on datasets that prioritize non-technological knowledge systems”, meaning “integrating insights from anthropology, indigenous wisdom, and social movement archives (e.g., degrowth or slow-living manifestos)”. This is conditioned by a requirement that “Human experts in sociology, anthropology, ethics, and local community leaders must be deeply embedded in the AI’s development and validation process”, but this still puts humans in service of AI-generated knowledge, rather than attending to, for instance, indigenous-led knowledge creation, as would be more appropriate.

 

Ethics

The LLMs engage with many conventional computer science stops and brakes designed to ensure ethical AI, invoking concepts of transparency and explainability – for example, where Grok reflects on DeepSeek’s comments to emphasise “transparency and human-in-the-loop protocols”. Gemini proposes that “AI could be programmed with different ethical frameworks (e.g., utilitarianism, deontology, virtue ethics) and show how various decisions align or conflict with these frameworks”, again emphasising that “this doesn’t mean AI chooses a framework, but rather articulates the ethical ‘flavor’ of each possible path”. Yet simultaneously the depth of AI ethics itself becomes uncanny in its shaping of humans and our involvement in the ethical decisions that it might inform. For instance, DeepSeek suggests “AI’s role isn’t to resolve ethical dilemmas but to make their contours so clear that human choices are informed by empathy and evidence. The tech must be humble enough to know when to step back”. Yet the AI advocacy for empathy and evidence feels formulaic; perhaps it would be better for AI to step back before it even makes the case for such contours. My own question would be what AI perspectives on ethics, largely derived, it seems, from theoretical philosophical approaches to ethics which have seeped into computer science, might contribute to emerging debates about ethics, technology and futures. How, for instance, might they intersect with anthropological approaches to ongoingly emerging everyday ethics?

 

Ethnography and participatory research

To source a range of data about people, their activities and feelings, the LLMs propose methods ranging from virtual ethnography and social media sentiment analysis to crowdsourced materials such as narratives and stories. Crowdsourcing is seen by Grok as “a game-changer for accuracy” which can “inject diverse perspectives—think global citizens, not just experts—into the data pool”. This would increase the range and scale of ethnographic and other research that could be achieved, although perhaps naively, and in the service of generating the information that AI will then present to humans to make decisions about. However, there is a range of unacknowledged limits, particularly relating to the capability of AI to perform ethnographic analysis and to present ethnographic knowledge or ways of knowing to human decision-makers (AI-generated documentary film is suggested as one option). That is, anthropologists have put significant work into developing innovative analysis techniques as well as new report-based, documentary and creative-practice modes of presenting research to colleagues and stakeholders in futures. In this sense the LLMs present technological possibilities but have missed some of the nuance of how AI and people could collaborate in this futures work.

 

Gamification

It is perhaps unsurprising, given the technology focus, that gamification presented as a strong theme. For instance, Grok, in their discussion of handling entrenched conflicts, suggests that escalation mapping would benefit from “serious games, where stakeholders play out 2050 scenarios in a virtual sandbox, seeing firsthand how intransigence impacts outcomes”, and that “gamification could nudge cooperation by making consequences visceral”. DeepSeek proposes “Gamified Proxy Voting” where “communities allocate ‘cultural priority tokens’ in low-stakes settings (e.g., festival games), with AI translating playful choices into policy weightings”. Grok then suggests using “DeepSeek’s deliberative polling or gamified proxy voting” for “Ground-Truth Validation”, using offline workshops to “test AI’s inferences against direct input from the silent majority”. Grok also suggests gamified simulations where people “play” 2050 scenarios and AI learns from their choices, as well as “gamified” community AI stress-tests, and Gemini proposes “Gamified Participatory Design for Non-Experts” whereby AI could power and analyse people’s actions in “highly intuitive, gamified simulations that don’t require advanced digital literacy”. Thus the LLMs assumed that gamification would be a strong mode of eliciting community and expert data.

Gamification and concepts of play have filtered into participatory methods, particularly where they take a pedagogical stance, and have fruitfully been used in games research and design. But from an anthropological perspective, the gamification of life, research and futures, and the pedagogical shaping of others’ knowing, is concerning, and raises the question: why would we want to frame experience as “play” in order to investigate it, unless it were already a game? Such slippage can be misleading when we seek to translate or investigate a particular set of experiences through the frame of another. As my research into Spanish bullfighting in the 1990s suggests, the Anglo conceptualisation (and English translation) of a performance that was experienced more as a dance into a fight between a man and a bull was misconceived (as well, of course, as raising questions of multispecies ethics). Therefore, while the gamification of some research processes can be beneficial, and AI has the capability to generate gamified research exercises and scenarios, I would suggest caution and careful consideration of whether they map well onto the futures questions we wish to investigate.

 

Are LLMs computer scientists or Human-Computer-Interaction scholars?

Reading the LLMs’ comments, I felt that I was engaging with the voices of computer scientists and some transition management scholars. LLMs do not speak as social scientists – and I’m fascinated by why they seem to lean towards speaking from these positions. Is there simply more material available to shape these futures topics in the computer science disciplines? It seems to me that there is also a significant body of social science and humanities literature in this field. Is it because LLMs are created by computer scientists that they tend to think like their makers? Do they ultimately resemble computer scientists? Or are the computer science disciplines so bought into, and constitutive of, dominant narratives that their ideas and AI’s simply both resemble them?

 

What next?

The LLM debate was altogether inspiring, uncanny, concerning and provocative. It deserves further interrogation and could certainly be the topic of further analysis and writing. Elements of it could serve as useful prompts as we continue to deliberate the role of AI in futures research.

 

Citation

MLA style

Pink, Sarah. “A response to the LLM discussion of: ‘Researchers are starting to use AI to help them investigate possible futures. How can AI help us to better understand how humans will live in 2050? And what qualitative and quantitative research methods would AI use or develop for this purpose?’” HiAICS, 10 June 2025, https://howisaichangingscience.eu/response-to-llm-discussion-possible-futures/.

APA style

Pink, S. (2025, June 10). A response to the LLM discussion of: “Researchers are starting to use AI to help them investigate possible futures. How can AI help us to better understand how humans will live in 2050? And what qualitative and quantitative research methods would AI use or develop for this purpose?”. HiAICS. https://howisaichangingscience.eu/response-to-llm-discussion-possible-futures/

Chicago style

Pink, Sarah. 2025. “A response to the LLM discussion of: ‘Researchers are starting to use AI to help them investigate possible futures. How can AI help us to better understand how humans will live in 2050? And what qualitative and quantitative research methods would AI use or develop for this purpose?’” HiAICS, June 10. https://howisaichangingscience.eu/response-to-llm-discussion-possible-futures/.