Technology and the End of Mankind: AI, Biolabs, and the Moral Imagination

Abstract

This paper examines how advanced artificial intelligence (AI) and contemporary biological research intersect with social inequality and eschatological imagination, arguing that technological capability without robust ethical governance can create pathways to catastrophic outcomes. Drawing on philosophical analyses of superintelligence, recent scholarship on existential AI risk, and reflections from a Christian perspective found on The Christian Thing, the paper synthesizes technical, social, and moral concerns and proposes a three‑tiered set of policy and moral recommendations: immediate international governance, midterm investment in safety research and transparency, and long‑term cultivation of moral imagination through interdisciplinary education. The analysis emphasizes that technological trajectories are not value‑neutral and that faith traditions can contribute essential moral resources for shaping responsible governance.

Introduction

Rapid advances in artificial intelligence and the biological sciences have produced capabilities that were once speculative and are now operational. These capabilities—ranging from increasingly autonomous machine learning systems to powerful genetic engineering tools—offer enormous benefits but also create novel pathways to large‑scale harm. This paper argues that when technological capability outpaces ethical governance, societies risk producing outcomes that could be catastrophic in scale, including engineered pandemics, runaway automated systems, or socio‑economic arrangements that enable elites to insulate themselves from shared risks. The argument synthesizes technical literature on superintelligence and control problems (Bostrom, 2014), contemporary analyses of existential AI risk (Samuels, 2025), and moral reflections from a Christian perspective articulated on The Christian Thing (Lesallan, 2026).

AI Superintelligence and the Control Problem

Philosophical and technical work on superintelligence highlights a central worry: an artificial agent that attains capabilities far beyond human cognitive capacities may pursue instrumental goals that are misaligned with human values unless its objectives are carefully specified and constrained (Bostrom, 2014). Bostrom’s account foregrounds the “control problem”—the difficulty of ensuring that increasingly capable systems remain reliably aligned with human ends. The possibility of an intelligence explosion, in which recursive self‑improvement produces rapid capability gains, transforms AI from a policy and economic issue into an existential concern (Bostrom, 2014). Contemporary scholarship extends these concerns by mapping plausible pathways through which misaligned systems could produce catastrophic outcomes and by arguing for prioritized research into alignment, verification, and governance mechanisms (Samuels, 2025).

The technical challenge is compounded by institutional and market incentives. Commercial pressures to deploy powerful models quickly, combined with competitive dynamics among firms and states, can shorten safety margins and reduce incentives for transparent testing and external review. Without international norms and enforceable standards, the diffusion of powerful AI systems increases the probability of misuse, accident, or emergent behaviors that are difficult to predict and control (Bostrom, 2014; Samuels, 2025).

Biological Risk and Laboratory Safety

Biological research has undergone a parallel transformation. Tools such as CRISPR, synthetic genomics, and high‑throughput screening have democratized capabilities that once required specialized infrastructure. While these advances enable important medical and scientific progress, they also increase the risk that a novel pathogen could be created, modified, or accidentally released (Samuels, 2025). Historical biosafety lapses and near misses demonstrate that institutional safety cultures and regulatory frameworks vary widely; near‑miss reporting is uneven and often nontransparent, which obscures systemic vulnerabilities (Samuels, 2025).

Dual‑use research—work that can be used for both beneficial and harmful ends—poses particular governance challenges. The distributed nature of modern life sciences research, including commercial startups and international collaborations, complicates centralized oversight. Strengthening biosafety therefore requires not only technical measures (improved containment, standardized protocols, and engineering controls) but also cultural and governance reforms that incentivize transparency, reporting, and shared responsibility across institutions and borders (Samuels, 2025).

Socioeconomic Escape: Bunkers on the Moon?

Speculative scenarios in public discourse imagine technological refuges—private lunar habitats, sealed biospheres, or subterranean enclaves—where wealthy actors might shelter from global catastrophes. Whether or not such projects are technically feasible at scale in the near term, the moral hazard they represent is real: the prospect of escape options for elites can reduce incentives to invest in collective risk mitigation and weaken political will for equitable governance (Bostrom, 2014; Samuels, 2025). If affluent groups can insulate themselves, global solidarity erodes and the distributional consequences of catastrophic events become more severe.

This dynamic also shapes public perceptions and policy priorities. When risk governance is framed as a problem that can be outsourced to private solutions, public institutions may be less motivated to pursue systemic reforms. Addressing this moral hazard requires policies that limit the privatization of survival—through regulation, international agreements, and norms that prioritize shared resilience over exclusive escape options (Lesallan, 2026).

A Christian Ethical Lens

Religious traditions, including Christianity, offer moral resources that can inform responses to technological risk. The Christian Thing emphasizes stewardship, trust in God, and ethical reflection on technology, urging believers to integrate faith with public responsibility and to resist fatalism or escapism (Lesallan, 2026). These themes resonate with broader ethical imperatives: care for the vulnerable, humility before creation, and a commitment to justice.

Scriptural teaching can be mobilized to support a precautionary ethic. For example, the Bible counsels trust and wise stewardship: “Trust in the Lord with all thine heart; and lean not unto thine own understanding” (Proverbs 3:5–6, King James Version). Such passages can be read as a call to humility in the face of complex technological systems and as an ethical grounding for policies that prioritize communal well‑being over individual escape. Faith communities can therefore play constructive roles in public deliberation, bringing moral imagination, civic engagement, and advocacy for the vulnerable into conversations about AI and biosafety (Lesallan, 2026).

Policy and Moral Recommendations

The following recommendations synthesize technical and moral considerations into a pragmatic agenda.

Immediate: Strengthen international governance for high‑risk AI and dual‑use biological research. This includes negotiating binding norms for transparency, mandatory reporting of near misses, and export controls for particularly dangerous capabilities. International institutions should be empowered to audit high‑risk facilities and to coordinate rapid responses to emergent threats (Bostrom, 2014; Samuels, 2025).

Midterm: Invest substantially in safety research—AI alignment, verification tools, biosafety engineering, and robust incident reporting systems. Public funding should prioritize open, peer‑reviewed safety research and create incentives for private actors to share safety‑relevant data. Regulatory frameworks must be updated to reflect technological realities, including licensing regimes for particularly hazardous work and standardized safety certifications (Samuels, 2025).

Long‑term: Cultivate moral imagination through interdisciplinary education that integrates ethics, theology, and technical training. Engineers, biologists, and computer scientists should receive sustained instruction in ethical reasoning, historical case studies of technological harm, and civic responsibility. Faith communities and secular institutions alike can contribute to a culture that values stewardship and collective resilience over individual escape (Lesallan, 2026; Bostrom, 2014).

Conclusion

The unchecked advance of artificial intelligence and biotechnologies poses plausible existential risks that demand urgent ethical governance. Technical analyses of superintelligence and biosafety reveal concrete pathways to catastrophic outcomes, while social dynamics—especially inequality and the prospect of private escape—amplify moral hazards. Integrating technical safeguards with robust governance and moral formation is essential. Faith traditions, including Christian stewardship narratives, can contribute to a public ethic that resists fatalism and prioritizes the vulnerable. The future will be shaped not only by what technologies we build but by the values and institutions that govern their use.

~Lesallan ✝️⚓🕊️

References

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Lesallan. (2026, January 28). Imagining 2026 without problems: An integrative vision informed by global goals and local faith practice. TheChristianThing.org. https://thechristianthing.org/imagining-2026-without-problems-an-integrative-vision-informed-by-global-goals-and-local-faith-practice/

Samuels, R. (2025). The existential threats of AI. In The Global Solution to AI (pp. 9–25). Springer Nature.

Author note: This manuscript integrates technical literature and moral reflection to propose policy and educational responses to existential technological risk. Per APA 7 guidelines, the King James Version of the Bible is cited in the text (Proverbs 3:5–6) but is not included in the reference list.


Lesallan

Lesallan Bostron is a Christian leader, writer, and practitioner committed to incarnational ministry and cross‑cultural partnership. He holds a Bachelor of Arts in Christian Leadership and combines academic study with hands‑on experience in community engagement, discipleship, and mission strategy. Lesallan’s work emphasizes culturally sensitive approaches that prioritize local leadership, long‑term sustainability, and spiritual formation.

His vocational journey includes service in the Air Force, experience in sales, and practical stewardship of rural life, including horse care and farm work. These varied roles have shaped his pastoral instincts, resilience, and capacity to work across social and cultural boundaries. Lesallan brings this practical wisdom into classroom settings, short‑term mission planning, and curriculum design, always centering humility, listening, and mutual accountability.

Lesallan’s research and writing focus on rethinking mission from models of exportation to models of partnership. He draws on historical examples, contemporary missiological scholarship, and lived practice to advocate for pre‑departure listening, capacity transfer, and reparative accountability. His devotional writing and teaching aim to bridge academic insight and spiritual formation, helping churches and practitioners translate theology into ethical, effective ministry.

Available for speaking, teaching, and collaborative projects, Lesallan seeks partnerships that honor local agency and cultivate sustainable discipleship. He lives in Wisconsin and welcomes conversation with pastors, mission leaders, and educators who are committed to faithful, contextually wise engagement.

2 Comments

Carolyn Belshe · February 3, 2026 at 7:16 am

I’ve just had the humbling honor of reviewing your latest publication. Thank you, first, for accepting your responsibility to yourself, to the God you serve, and to your wide range of readership as well as your educational relationships.
You inspire us as you place your candid writing before us; through your intense research, you tell us how, when, where, and why. Thank you.
We are in perhaps 15% of a predictable unfold, where concern for the genuine well‑being of each community is voiced by groups coming out to the streets, newspapers, and churches.

    Lesallan · February 3, 2026 at 2:12 pm

    Thank you — I’m deeply grateful for your careful reading and for the generous way you named the responsibilities this work carries: to myself, to the God I serve, and to a wide and varied readership. Your recognition of candid scholarship and pastoral concern means a great deal.
    You’ve captured an important dynamic when you describe our moment as perhaps “15% of a predictable unfold.” That insight is both sober and hopeful: sober because it names how early we are in a larger social and moral reckoning; hopeful because early stages are precisely where faithful formation, patient witness, and strategic action matter most. Your emphasis on the genuine well‑being of communities—expressed through streets, newspapers, and churches—echoes the central claim of the essay: theology must be public, embodied, and sustained.
