
I’m a founder who spends a lot of time working on humanoid robots. Today’s machines may be cutting-edge, but most of them are also belligerent, aggressively masculine, and creepy to look at.
Look at what Tesla announced this week as its strategy shifts from producing electric cars to producing robots. Its Optimus general-purpose humanoid is a prime example of the physical design shared by most such robots. These machines may be technically impressive, but most people won’t want to share space with them, let alone invite them into their homes.
When it comes to humanoids, the dialogue is almost always the same. We talk about what they can do—how fast they move, how precise their grasp is, how much work they can take on. We benchmark performance and reliability, then get into arguments about flexibility, payload, and battery life.
What we rarely talk about is how they behave when things don’t go as planned. When the bot freezes in the middle of a conversation or powers down without warning.
As robots begin to move out of labs and warehouses and into hospitals, care facilities, and homes, this omission begins to look less like an oversight and more like a structural blind spot. Recent research projects that by 2035 the humanoid robot market will reach $8 billion, with annual shipments exceeding 1.4 million units. Yet the most critical questions about how these machines will fit into human spaces remain largely unanswered.
For decades, robotics has been about mastering the physics of the world. We invest significant effort in manipulation, motion, and navigation—enabling machines to reliably interact with noisy, changing, and harsh environments. This work is critical. Without it, nothing else matters.
But there has been little equivalent investment in the so-called social operating system of robots: how it interrupts, how it waits, how it resumes, how it expresses uncertainty, how it apologizes, how it listens. These behaviors rarely show up in benchmarks or demos, but they are exactly what determine whether a robot can be trusted once it starts sharing space with humans.
This imbalance is most pronounced in nursing homes and hospitals. In these environments, technical capabilities are table stakes. Two nurses can have the same clinical skills; the one with the better attitude toward patients will be the one patients seek out, trust, and forgive. The same dynamic applies to robots. Strength and precision are important, but they are not what make a system acceptable, safe, or welcome.
In addition to skills, compassion and care are essential. Twenty percent of U.S. adults report feeling lonely and isolated every day, and the number is only rising among older Americans, with 28% of those over 65 reporting loneliness. As our population ages and caregiver shortages worsen, the need for connected care will only grow. Building socially intelligent humanoid robots is therefore not only a technical challenge but a public health imperative.
Capability answers an easy question: What can this robot do?
Character answers a harder one: What will it choose to do, and how?
As robots enter social spaces, the most important interfaces are no longer just mechanical or computational; they are behavioral. People extend trust to systems whose behavior is predictable, respectful, and understandable, especially when things go wrong. Direct-to-consumer humanoids such as 1X’s home robot Neo are expected to enter homes to help with daily tasks. Companies are working toward that reality, but when a robot folds laundry incorrectly, suddenly interrupts a conversation, or freezes mid-action, the moment that determines whether it’s trustworthy isn’t the task itself but how the system responds to the error.
And errors will occur.
Every robot fails. Hardware breaks. Models mispredict. Timing slips. The real world is chaotic, and no system escapes that reality. The question is not whether failure occurs, but what happens next.
Did the robot admit its mistake?
Does it apologize in a way that feels genuine rather than scripted?
Does it explain in plain language what went wrong?
Does it ask for feedback, or adjust its behavior in response?
When I conceptualized my first robot for combating social isolation in the early days of Melbourne’s COVID-19 lockdown, I knew I wanted to prioritize approachability and tone. I didn’t need a robot to do things for me, like fold laundry or make my bed—I needed it to give me a hug, and at that point I hadn’t had a hug in four months.
Now, as a 25-year-old robotics founder, I find that it’s not that capabilities don’t matter, but that capabilities alone are not enough. Without trust, capabilities will never be used. In a chaotic human environment, a robot that fails politely will outperform a “perfect” robot that doesn’t know when to back off.
People will forgive these limitations if they trust the systems they interact with. They won’t forgive being run over.
Recent research confirms this intuition. A 2025 U.S. consumer survey found that while 65% of people expressed interest in owning advanced home robots, familiarity with robotics remains low, with 85% reporting only some or limited familiarity. Trust does not stem from perfection, but from a robot’s perceived usefulness, sociability, and appropriate behavior during interaction. What determines acceptance is not just technical prowess; it is whether these machines can navigate the social contract of shared space.
We already know how to build machines that can move. We’ve just begun building machines that know how to act appropriately.
If humanoid robots are to win a place in the social space, they will need more than just capabilities. They need character. Not as an aesthetic layer or a scripted personality, but as a core design principle—as carefully engineered as a motor, sensor, or control loop.
The robots that succeed in this decade will be the ones that are most socially acceptable, not the ones that can do the most.
The views expressed in Fortune opinion pieces are solely those of the author and do not necessarily reflect the views and beliefs of Fortune.
This story was originally published on Fortune.com.

