I helped design the rocket engines for NASA’s space shuttles. Here’s why businesses need AI as trustworthy as aerospace technology



When I was an aerospace engineer working on NASA’s space shuttle program, trust was critical to the mission. Every bolt, every line of code, every system had to be carefully verified and tested, or the shuttle would never leave the launch pad. When their missions were over, astronauts would walk through the office and thank the thousands of engineers who made it possible for them to return safely to their families. Such was the trust and safety ingrained in our systems.

Despite the tech industry’s mantra of “move fast and break things,” technology should be no different. New technologies require trust in order to accelerate growth.

By 2027, about 50% of enterprises expect to deploy AI agents, and a McKinsey report predicts that by 2030, up to 30% of work could be performed by AI agents. Many cybersecurity leaders I’ve spoken with want to introduce AI as quickly as possible to support the business, but they also recognize that these integrations need to be done safely and reliably, with the right guardrails in place.

For AI to deliver on its promise, business leaders need to trust it. That won’t happen on its own. Security leaders must take a page from aerospace engineering and build trust into their processes from day one, or risk missing out on accelerated business growth.

The relationship between trust and growth is not theoretical. I have lived it.

Build a business based on trust

After NASA’s space shuttle program ended, I founded my first company: a platform for professionals and students to showcase and share evidence of their skills and abilities. It was a simple idea, but one that required our customers to trust us. We quickly discovered that universities would not work with us unless we demonstrated that we could handle sensitive student data securely. That meant providing assurance through a number of different avenues, including showing a clean SOC 2 report, answering lengthy security questionnaires, and completing various compliance certifications through painstaking manual processes.

This experience shaped the founding of Drata, where my co-founders and I set out to build a layer of trust between great companies. By helping GRC leaders and their companies demonstrate their security posture to customers, partners, and auditors, we remove friction and accelerate growth. Our rapid growth in annual recurring revenue, from $1 million to $100 million in just a few years, is proof that businesses see the value and are slowly starting to shift from viewing GRC teams as cost centers to viewing them as business enablers. This translates into real, tangible results: we’ve seen security teams drive $18 billion in security-influenced revenue using our SafeBase Trust Center.

Now, with artificial intelligence, the stakes are even greater.

Today’s compliance frameworks and regulations (such as SOC 2, ISO 27001, and GDPR) are designed for data privacy and security, not for artificial intelligence systems that generate text, make decisions, or act autonomously.

Thanks to legislation like California’s new artificial intelligence safety standards, regulators are slowly starting to catch up. But simply waiting for new rules and regulations isn’t enough, especially when businesses are relying on new AI technologies to stay ahead of the curve.

You don’t launch an untested rocket

In many ways, this moment reminds me of the work I did at NASA. As an aerospace engineer, I never “tested in production.” Every shuttle mission was meticulously planned.

Deploying AI without understanding and acknowledging its risks is like launching an untested rocket: the damage can be immediate and catastrophic. Just as a failed space mission erodes trust in NASA, missteps with AI, whether from not fully understanding the risks or from failing to apply guardrails, erode consumer trust in an organization.

What we need now is a new operating system of trust. To put trust into practice, leaders should develop a plan that is:

  1. Transparent. In aerospace engineering, thorough documentation is not bureaucracy; it is the backbone of accountability. The same applies to AI and trust: traceability is required from policy to control, from evidence to certification.
  2. Continuous. Just as NASA monitors its missions around the clock, businesses must treat trust as an ongoing process rather than a point-in-time checkbox. Controls, for example, need to be continuously monitored so that audit readiness becomes a state of being rather than a last-minute sprint.
  3. Autonomous. Today’s rocket engines manage their own operations through embedded computers, sensors, and control loops, eliminating the need for pilots or ground crews to adjust valves directly in flight. As AI becomes more prevalent in everyday business, our approach to trust must become just as autonomous. If humans, agents, and automated workflows are to transact with one another, they must be able to verify trust on their own, deterministically and unambiguously.

When I think back to my spaceflight days, what strikes me most is not only the complexity of space missions, but also their interdependence. Tens of thousands of components built by different teams had to work together flawlessly. Each team trusted that the other teams were doing their jobs well, and decisions were documented to ensure transparency across the organization. In other words, trust was the layer that held the entire shuttle program together.

The same is true for AI today, especially as we enter the nascent era of agentic AI. We are moving to a new way of doing business, where hundreds (and potentially thousands in the future) of agents, people, and systems constantly interact with one another, generating tens of thousands of touchpoints. These tools are powerful and the opportunities are huge, but only if we can earn and maintain trust in every interaction. Companies that build a culture of transparency, continuity, and autonomy will lead the next wave of innovation.

The future of artificial intelligence is already being built. The question is simple: will you build it on trust?

The views expressed in Fortune opinion pieces are solely those of the author and do not necessarily reflect the opinions and beliefs of Fortune.



