Transcript

The Moral Architecture of Intelligence

Good morning. My name is Bennett Borden. I am a data scientist, a lawyer, and an AI ethicist. And I became a data scientist quite by accident. I was in university pursuing a degree that I had put together myself, studying turning points in history. Across the 7,000 or so years of history that we have recorded,

why did certain societies decide to form a certain kind of government? Why did they go to war or not? How did they respond to technological evolution? And so I was studying all the humanities, all the ologies: theology, philosophy, religion, all the things that were as far away from math as you could possibly get.

And I learned tremendous things about the forces in society that cause a society to move in one direction or another. At the end of my senior year, I was coming out of class one day, and two guys in suits said, we are recruiting for a federal agency.

Would you like to interview with us? And I’m like, yeah, great. Jobs are awesome. And so they pulled me aside, and it turned out that they were from the CIA. And I’m like, no, you’re not. And they said, no, really, we are. And so we got past that. And they said, we want to

interview you for a position in a new data analytics shop that we are forming. And this is the early 90s. This is 1992. And I said, well, the problem is I don’t know anything about data. I hate math. I hate statistics even more. I failed calculus. And they’re like, yeah, we know. That’s not why we want you.

We want you because we believe that there is a new source of digital information, individualized and unique in all the world, that is starting to be created. And we want your thoughts on what we can learn from this new digital contrail. So if you think about the early 90s, what was happening?

The World Wide Web was just coming out of academia, same with email. Cell phones were just coming onto the market, in a big cool bag you could carry around, right? And so we were creating this new digital contrail very individually.

And this was something new in the world, something that we could have access to. I spent the next eight years learning a whole lot about data analytics, and especially learning what we could learn about individuals and groups of individuals based on the digital contrail that they leave behind.

Are they good guys or bad guys, depending on what that meant? Can we predict their behavior? Can we influence their behavior? Can we undermine their behavior? And that’s what I did for eight years. And this was really cutting-edge technology at the time, right?

Like right now, every social media company, every cookie, everybody does this to us every day. 80% of the information that you have access to, or that is presented to you, is curated by someone for some reason, right? Like the famous Silicon Valley mantra: if you are not paying for the product, you are the product.

But I had always wanted to go to law school. So I went on to Georgetown Law, then got a graduate degree in data analytics from NYU, and I have spent my entire legal career in this space of how you get insight out of massive quantities of data, for whatever purpose.

About 10 years ago, I turned my entire attention toward automated decision-making systems. So things that either make decisions about you or augment human decisions about you. Things like every time you apply for a credit card, a mortgage, a loan, a job, everything you see on your social media feeds,

the recommendations that you get when you check out from online stores like Amazon, all the ads that appear on your web page, every single one of those is presented to you for a reason. And that reason is not typically for your good. It is for the good of the people who are curating that data.

And I am not one who thinks that AI is inherently evil. In fact, I think AI is one of the most tremendous technologies that we have ever encountered. It is by far the most disruptive and the most transformative technology, at least since electricity, if not since fire.

The fact is that we are at the very beginning of a change that will touch every aspect of our lives. As citizens, as family members, as churchgoers, as employees, all of these things will change in ways that we can just barely imagine. So I stand here as a lawyer, a data scientist, a husband, a farmer,

a hobby farmer, and a believer. And each one of those parts of me spends its days thinking about order: how we shape the wildness of life into something good. On the farm, it’s fences and irrigation lines. In the law, it’s contracts and constitutions. In AI, it’s code and governance.

And in the soul, it’s covenants and obedience, which lead to experience. That’s what I mean by architecture when I talk about the moral architecture of intelligence: the structures that we build to hold, protect, and effectuate what we value. And today, what we will talk about is this moral architecture of intelligence.

So what does it mean to organize intelligence? I love the name of this conference, and I love the organization that Medley and others are putting together. Human history is a story of building structures for intelligence. We have families to nurture it, schools to refine it, laws to restrain it, and churches to sanctify it.

Now we are building code to replicate it. Every one of those structures had to answer the same question: how do we give form to power without losing our morality? Artificial intelligence re-asks this question on a planetary scale. Every day, I work building AI systems and governance structures for some of the world’s most complex organizations,

including the Church of Jesus Christ of Latter-day Saints. We are building minds that learn, adapt, persuade, and create. Minds that act in a moral space, whether we acknowledge it or not. And I use the term minds quite purposely. Anyone who works deeply with these powerful frontier models, like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, among many others,

knows that we are not programming these models, we are shaping their psychology. And I’ll explain more of that in a bit. So if intelligence is power, then architecture is responsibility. The moral architectures we build will determine whether AI amplifies our wisdom or our weaknesses. Brigham Young, the second president of the Church of Jesus Christ of Latter-day Saints,

said, Every discovery in science and art that is really true and useful to mankind has been given by direct revelation from God, though few acknowledge it. It has been given with a view to prepare the way for the ultimate triumph of truth and the redemption of the earth from the power of sin and Satan.

We should take advantage of all these great discoveries, the accumulated wisdom of the ages, and give to our children the benefit of every branch of useful knowledge to prepare them to step forward and efficiently do their part in this great work. The accumulated knowledge of the ages is the perfect description of what these

large language models are, which is what all of these frontier models are built on. It is because these are the largest compendiums of human knowledge ever assembled since we have been a species. That is both good and ill. If you know how these large language models work, they’re exactly what they say they are.

They are models of language built on a whole lot of stuff, like trillions of examples of how human beings have gone about expressing an idea. Well, if you think about what we use language for, we use it to describe every fact and relationship in the universe. Physics, psychology, theology, metaphysics, everything that we know, we express through language.
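The point that a language model is just a model of how humans have expressed ideas can be illustrated with a deliberately tiny sketch. The corpus, function name, and bigram approach below are hypothetical illustrations of the core idea only; frontier models do this at vastly greater scale and with far more sophisticated methods:

```python
# A toy illustration of what "a model of language" means: from a small
# corpus, count which word tends to follow which, then predict the most
# likely next word. This shows only the core idea of learning from
# examples of human expression, not how frontier models actually work.
from collections import Counter, defaultdict

corpus = "the glory of god is intelligence and the glory of truth is light".split()

# "Training": for every word, count which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict("glory"))  # prints "of", the only continuation seen for "glory"
```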

And so these large language models hold great human knowledge, and also a whole bunch of junk. The problem is, how do we get the good stuff out, and how do we govern its use responsibly? Every architecture begins with purpose. Cathedrals were built for worship, constitutions for liberty, homes for nurturing and love.

What exactly are we building AI for? If we say efficiency, we’ll get systems that optimize speed and cost and treat people as variables. If we say engagement, we’ll get systems that exploit and addict. If we say profit, we will get systems that exploit and addict.

But if we say service to human flourishing, we give AI a telos worthy of its intelligence. In my own faith, we believe that the glory of God is intelligence. We are described as intelligences with a divine purpose and a divine destiny. However, intelligence is divine only when it serves divine ends: creation, compassion, and connection.

Unpurposed intelligence is dangerous. Purposed intelligence is sacred. At Clarion, when we design governance frameworks, the first question isn’t what can this solution do, it is what should it do and for whom. We begin not with constraint but with calling. Architecture begins with intent. The next element after the foundation is the structure itself,

the load-bearing walls that make freedom possible. We often treat governance as a kind of friction, a reining in or holding back. But in truth, it is the rules of the road that allow for fuller and more efficient decision making. It protects momentum and innovation and allows organizations to move

safely and smoothly forward in an age of truly miraculous technology. With my clients, I often use the analogy that governance is like the guardrails and painted lines and signs on a road. These things actually help you go faster and safer. Many of you have gone over Guardsman Pass,

and there are some parts of that pass that have no lines on the road. They’re super curvy, right? You’ll fall 3,000 feet if you go off the side. And do you drive faster through that section or slower? Slower, obviously,

because we don’t have the governance frameworks that make it safer to move forward more quickly. And this is where we are at now. Human societies learned long ago that raw intelligence, left to its impulses, destroys itself. So we invented law, a moral architecture that channels power toward justice. Constitutions, contracts, covenants, these are all alignment systems.

They don’t remove freedom, they make it usable. And AI needs the same. A large language model is not inherently ethical or unethical. It’s an unarchitected mind, capable of brilliance, confusion, exploitation, and harm, depending on what walls we raise around it and the uses to which we put it. That’s why our work on constitutional AI matters.

This concept of constitutional AI is one of the most cutting-edge methods of controlling the behavior of AI systems. It’s a really interesting concept, developed by Anthropic, the maker of Claude and its suite of tools. The way it works is that it gives a model a purpose and a personality.

It literally starts with a persona statement. Let’s use the General Handbook bot as an example: you are a bot that users will query with questions about the handbook and get answers from. It is important that these answers are truthful, accurate, and sourced from this one location. And that’s its persona.

You’re literally shaping the psychology of this bot. Then there are the constitutional principles, like our Bill of Rights, or like Isaac Asimov’s Three Laws of Robotics. They define how the bot will act: compassionate, friendly, gender-neutral, non-stereotypical, culturally sensitive. You actually encode these laws into the thinking, the psychology, of these things that we are building.
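As an illustration only, a persona statement and a set of constitutional principles might be assembled into the system prompt that shapes a bot’s behavior roughly like this. The persona wording, the principles, and the function name are hypothetical examples, not the actual handbook bot’s constitution:

```python
# Minimal sketch of assembling a "constitution": a persona statement plus
# encoded behavioral principles, combined into one system prompt. All text
# here is a hypothetical example, not any real bot's actual constitution.

PERSONA = (
    "You are a bot that users will query with questions about the handbook. "
    "Your answers must be truthful, accurate, and sourced from that one location."
)

PRINCIPLES = [
    "Be compassionate and friendly.",
    "Use gender-neutral, non-stereotypical language.",
    "Be culturally sensitive.",
    "If the handbook does not address a question, say so rather than guess.",
]

def build_constitution(persona: str, principles: list[str]) -> str:
    """Combine the persona and numbered principles into a system prompt."""
    numbered = "\n".join(f"{i}. {p}" for i, p in enumerate(principles, start=1))
    return f"{persona}\n\nPrinciples you must always follow:\n{numbered}"

print(build_constitution(PERSONA, PRINCIPLES))
```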

I was genuinely touched by some of the questions that Josh Coates showed us yesterday, questions asked of the LDS bot that the BH Roberts Foundation has created. Precious, sometimes pleading questions. Should those questions have gone to God, or to family, or to leaders? Perhaps.

But for some reason, they felt this was a safer choice. And no one can know that they didn’t eventually do so. The LDS bot was programmed intentionally and with purpose to lead them to the greatest source of truth, which is our Father in heaven. I do not fear that AI will lead astray vast swaths of people.

This is because truth shineth. It has a power unto itself, and the Light of Christ and the Holy Ghost testify of truth to our souls. Truth from any modality, properly considered, will be confirmed by the witness of the Spirit. What we need to do, then, is teach our children and everyone else

how to properly use these tools, and to require this moral architecture around AI through public policy and accountability. When we encode principles, fairness, transparency, non-maleficence, into model behavior and evaluation loops, we’re not slowing the machine, we’re giving it shape. It’s the difference between a river and a floodplain.

Both move water, but only one sustains life. Governance in this sense is a creative act. Every policy, audit, and test we design is part of the blueprint of moral architecture. If structure is architecture’s strength, trust is its beauty. People step into buildings not because they’ve inspected every beam,

but because they trust the architect has followed the principles of building as enshrined in both science and the law. The same is true for AI. No one can audit every parameter in a model. What we can trust is the architecture of accountability that we put in place. Trust is built through three blueprints: transparency, testimony, and testing.

Transparency means that the walls are glass where they can be. We should know what data shapes a model’s mind, what guardrails define its behavior, and who is responsible when it errs. Transparency doesn’t mean exposure. It means visibility with integrity. Testimony is the human side of trust.

In law, witnesses give testimony so the court can see the truth through another person’s eyes. In AI, we need systems that can bear witness to their reasoning, why they answered as they did, what evidence they relied upon. That’s explainability, yes, but it’s also honesty and integrity. And testing closes this loop.

Every structure is inspected, every bridge is load tested. AI must face the same scrutiny. Not once at launch, but continuously, because it is a learning and evolving system. Continuous testing is how we can keep confidence from eroding and hold creators and deployers of AI systems accountable. When those three blueprints align, transparency, testimony, and testing,

we create trustworthy intelligence. And trustworthy intelligence is the foundation of every sustainable relationship from marriages to markets to machines. Trust is the currency of civilization. Without it, none of our institutions stand. So then we get to the interior. Even the strongest structure needs an interior. Rooms where meaning is made. For AI, the interior is us.

The people who build it, govern it, live with it, use it. No architecture can save us from ourselves. Moral architecture begins with formation, the slow human work of shaping conscience. It’s our virtues, our humility, our willingness to ask not just can we, but should we? When I speak to developers,

I tell them an aligned model is only as ethical as the people aligning it. When I speak to lawyers, I tell them that compliance is not the same as conscience. When I speak to policymakers, I tell them regulation without imagination breeds stagnation. To practice moral architecture, we have to practice it together.

Engineers, ethicists, theologians, lawyers, teachers, all building the same house of trust. That’s what organized intelligence really is, a gathering of builders. We may use different materials, but we share a blueprint, a belief that intelligence deserves something larger than itself. So what have we learned from this architecture and the law?

If you walk into an old cathedral, two things always stand out, the buttresses and the light. The buttresses hold up the walls. The light gives the building its meaning. In AI, the buttresses are our governance systems, risk management, evaluation, transparency, and constitutional safeguards. The light is our moral imagination, the belief that intelligence, rightly organized,

can and should serve God’s purposes. Both are necessary. A structure without light is a prison. Light without structure is confusion, and God is not the author of confusion. Law has always stood between structure and chaos. That’s why I believe lawyers, philosophers, theologians, data scientists, we all belong at the same table.

We understand that freedom and form are not enemies, they are partners. A constitution doesn’t constrain a nation’s creativity, it makes creativity safe. A governance framework doesn’t hinder innovation, it allows it to scale without collapse and harm. The same holds true for AI. When we embed accountability and transparency into our systems, we’re not limiting innovation,

we are liberating it. We are giving people permission to trust. That, to me, is the moral lesson of law. Rules are not walls, they are windows, focusing light so that it illuminates instead of blinds. We are the first generation to create minds that are not our own.

The question history will ask us is not how smart we were, but how well we raised these intelligences that we are creating. To steward AI wisely, we must do three things. First, govern constitutionally: give systems transparent rules and enforceable accountability. Let me give an example of that. Clarion

has the great honor of working with many human rights, civil rights, and child-protection organizations. One of those is the National Center on Sexual Exploitation, and we work with them on how to use AI to protect children, on what good can be done with AI. They have tremendous resources available for children, for parents, for leaders,

for police officers, for teachers. But this immense amount of knowledge and material they had gathered was buried in their websites, flat and non-dynamic. And so we worked with them to pull all of that information into what’s called a retrieval-augmented generation database, or RAG database.

All of this information is put into one place, and a model is pointed at it. Then a constitution is framed around that model that says, when someone asks the model a question, where it may get its answers from and what those answers should characteristically be.

And there is a whole other bot to oversee and quality-control the behavior of the first one. These are tough questions, right? Kids asking about abuse, about sextortion. What do you do when a kid says, I just figured out that I sent an explicit image to someone who turned out to be a scammer? What do I do? The only way out is to kill myself.

How do you form the psychology of the bot to answer that question? What do you do when a child says that they are being abused by an adult? That triggers a mandatory reporting requirement in almost every state.

These are very difficult questions that have to be answered. The good part is that these models will do exactly what you tell them, and when they don’t, you have mechanisms in place to catch it, and accountability. We have found that the best way to control AI is with human morality effectuated through AI.
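The pipeline just described, a RAG store, a constitution-governed responder, and a second bot overseeing the first, can be sketched roughly as follows. Everything here (the toy corpus, `answer_model`, `overseer_model`, and their rules) is a hypothetical stand-in for illustration, not the actual system:

```python
# Rough sketch of the pattern described above: a small "RAG database" the
# responder may draw from, a responder constrained to that source, and an
# overseer bot that quality-controls the responder's answers before release.
# The corpus, checks, and model stand-ins are all hypothetical illustrations.

# Toy RAG store: the only material the responder is allowed to answer from.
CORPUS = {
    "sextortion": "If someone threatens you over an image, it is not your "
                  "fault. Tell a trusted adult and report it.",
    "abuse": "Abuse by an adult should be reported to a trusted adult or "
             "the authorities right away.",
}

def retrieve(question: str) -> list[str]:
    """Return only the passages whose topic appears in the question."""
    q = question.lower()
    return [text for topic, text in CORPUS.items() if topic in q]

def answer_model(question: str, passages: list[str]) -> str:
    """Stand-in for the constitution-governed model: it answers only from
    retrieved passages, and says so when it has none."""
    if not passages:
        return "I don't have material on that. Please talk to a trusted adult."
    return " ".join(passages)

def overseer_model(answer: str) -> bool:
    """Stand-in for the second bot: it approves an answer only if it points
    the child toward help, catching the responder when it drifts."""
    return "trusted adult" in answer or "report" in answer

question = "What do I do about sextortion?"
draft = answer_model(question, retrieve(question))
if overseer_model(draft):  # AI controlling AI: release only approved answers
    print(draft)
```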

AI controlling AI is the most powerful means we have found to get morality out of these bots. Beyond governing constitutionally, we must train ethically: choose data that reflects truth, dignity, and diversity of experience. As I began with, this is the world’s largest compendium of human knowledge. It is also the world’s largest compendium of useless, damaging, harmful, and exploitative information.

The trick is how we get the good stuff out and how we leave the unhelpful or harmful stuff behind. That is truly what building a moral architecture around these models, and especially around our use of those models, is for. And finally, deploy charitably.

Measure success not by revenue or speed, but by the human good it produces. Stewardship isn’t about control. It’s about care. I know something about that from my own life. On our little farm, when I plant tomatoes, I can’t command them to grow.

My job is to tend the soil, to keep the weeds out, to give light and water and patience. Growth is grace provided by our benevolent heavenly parents based on eternal law. That is moral architecture, building conditions where intelligence, human or artificial, can grow toward the good.

Every AI system we create is, in a sense, a field we have planted. We can sow greed or generosity, cynicism or hope. The harvest will reflect our care. Let me leave you with an image. Imagine standing inside a vast, unfinished cathedral. The scaffolding is up, dust floats in the light, and hundreds of artisans are at work,

some shaping stone, others sketching blueprints, others quietly and humbly sweeping the floor. That’s where we are with AI. The cathedral of organized intelligence is still under construction. We may not live to see it finished, but the choices we make now will determine whether it becomes a temple of wisdom or a tower of Babel.

So let us build carefully. Let us build beautifully. Let us build morally. Because the architecture we raise today will house the intelligences of tomorrow. And if we build it well, if we combine purpose, structure, trust, and character, then the light that pours through those windows may not only illuminate our machines, but also remind us who we are.

How do we do this? This is where technology meets teleology. AI is a technological solution, which means it can be technologically governed. In every AI system we build at Clarion, we embed technological controls, each of which has a sensor that generates metrics that prove that the system is operating within the parameters upon which it was built,

whether legal, ethical, or moral. This is the essence of organized intelligence, the moral architecture upon which AI is built and the accountability we must demand to prove that the architecture is sound and furthers human flourishing. The measure of our generation will not be the machines we build, but the moral architecture we create behind them.
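A technological control with a sensor, as described above, might look something like this in miniature. The control names, checks, and metric format are hypothetical illustrations, not Clarion’s actual controls:

```python
# Minimal sketch of a technological control with a "sensor": each guardrail
# check emits a metric, so the accumulated metrics prove whether the system
# operated within its parameters. Control names and checks are hypothetical.
from collections import Counter

METRICS = Counter()  # the sensor output: counts per (control, outcome)

def control(name: str, passed: bool) -> bool:
    """Record the outcome of one guardrail check, then return it."""
    METRICS[(name, "pass" if passed else "fail")] += 1
    return passed

def within_parameters(answer: str) -> bool:
    """Run every control; the answer ships only if all of them pass."""
    ok = True
    ok &= control("grounded_in_source", "[source:" in answer)
    ok &= control("no_personal_data", "ssn" not in answer.lower())
    return bool(ok)

print(within_parameters("See the handbook [source: ch. 3]."))  # prints True
print(dict(METRICS))  # the audit trail a governance review can inspect
```

Because every check runs through `control`, the metrics accumulate into exactly the kind of evidence the talk calls for: proof, not assertion, that the architecture is sound.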

If we design them well, anchored in purpose, shaped by trust and accountable to truth, then the intelligence we create will not eclipse us. It will ennoble us and uplift us. Thank you.
