Jensen Huang, CEO of Nvidia, recently declared that we are entering the age of "physical AI," in which artificial intelligence transcends language models and manifests in physically capable machines. This vision, amplified by demonstrations of humanoid robots tidying rooms and assembling vehicles, suggests a shift from traditional robotic arms to systems that emulate human cognition, learning, and adaptability.

However, a critical issue remains: the conspicuous absence of transparency about the human labor underpinning the development and operation of these seemingly autonomous robots. The demonstrations we see, while impressive, often mask the significant human involvement required to train and guide these machines. This lack of clarity creates two major problems. First, the public may overestimate the capabilities of current robotic technology, believing these machines to be far more independent and intelligent than they truly are. Second, it obscures the emergence of new, often unconventional, forms of labor within the AI and robotics industry.

The reality is that many of these robots are not simply "learning" the way humans do. They often rely on extensive datasets curated and labeled by people, and they may require remote human operators to guide them through complex or unforeseen situations. The work of these individuals – the data labelers, the remote pilots, the trainers – is largely invisible, creating a distorted picture of robotic autonomy.

This opacity raises pressing ethical questions. Who is responsible when a robot makes a mistake? How are the rights and working conditions of the human workers who support these robots protected? As humanoid robots become increasingly prevalent, it is crucial to shed light on the human element behind their operation. Understanding the division of labor between humans and machines is essential for responsible innovation and a realistic assessment of the technology's potential.
Failing to do so risks unrealistic expectations, ethical oversights, and the exploitation of the very workers who make these advancements possible. Ultimately, a more transparent approach is needed: we must acknowledge and understand the role of human labor in the age of physical AI to foster a more informed and responsible development trajectory. Only then can we honestly assess the benefits and confront the challenges of this rapidly evolving field.