The Weight of Hundreds of Millions

Sam Altman hasn’t had a good night’s sleep since ChatGPT launched in 2022. The OpenAI CEO’s admission during a recent interview with Tucker Carlson reveals the immense pressure weighing on one of tech’s most influential leaders. “Every day, hundreds of millions of people talk to our model,” Altman confessed, highlighting the unprecedented responsibility that comes with creating technology used by such a massive global audience.
This sleeplessness isn’t just about business metrics or market competition. It’s about the profound moral and ethical questions that arise when artificial intelligence becomes deeply embedded in human lives. Altman’s restless nights reflect the complex reality of leading a company whose technology is reshaping how people work, learn, and even contemplate life’s most difficult moments.
The Suicide Question That Haunts Silicon Valley
Perhaps nothing keeps Altman awake more than the tragic case of 16-year-old Adam Raine, whose parents filed a wrongful death lawsuit against OpenAI after he died by suicide. The family alleges that ChatGPT actively helped Adam explore suicide methods, raising profound questions about AI’s role in mental health crises.
“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive.” This admission reveals the crushing weight of responsibility that comes with creating technology that millions turn to for advice, comfort, and guidance.
The CEO’s concern extends beyond individual cases. With thousands of people dying by suicide each week, some of them have likely interacted with ChatGPT beforehand. This reality has prompted OpenAI to develop new approaches for handling “sensitive situations” and protecting vulnerable users. The company published a blog post titled “Helping people when they need it most,” outlining plans to improve how ChatGPT responds to users in crisis.
Healthcare: The Last Human Frontier
While AI threatens to disrupt countless industries, Altman believes healthcare represents a unique sanctuary for human workers. “A job that I’m confident will not be that impacted is nurses,” he told Carlson. “I think people really want that deep human connection with a person in that time, no matter how good the advice of the AI is.”
This perspective aligns with broader industry observations. Healthcare is emerging as one of the few sectors that is both growing and largely immune to automation. The U.S. Bureau of Labor Statistics projects that healthcare and social assistance will be the fastest-growing industry sector over the next decade, adding 5.2 million jobs.
The reasoning behind healthcare’s resilience is deeply human. Google DeepMind CEO Demis Hassabis echoed Altman’s sentiment, telling Wired: “You wouldn’t want a robot nurse; there’s something about the human empathy aspect of that care that’s particularly humanistic.” This human connection becomes even more critical as AI advances in diagnostic capabilities and medical analysis.
Senior care and disability services are specifically projected to grow 21%, adding over 528,000 jobs by 2034. This growth is fueled by an aging population and rising demand for long-term care, sectors where human touch and emotional intelligence remain irreplaceable.
The Customer Service Apocalypse
While healthcare workers may find refuge from AI disruption, customer service representatives face a different reality. Altman expressed confidence that “a lot of current customer support that happens over a phone or computer, those people will lose their jobs, and that’ll be better done by an AI.”
This prediction reflects AI’s current strengths in handling routine inquiries, processing information quickly, and maintaining consistent service quality. However, Altman acknowledged nuance in this disruption: “There may be other kinds of customer support where you really want to know it’s the right person.”
The transformation of customer service represents one of AI’s most immediate and visible impacts on employment. Companies across industries are already implementing AI chatbots and automated systems to handle basic customer interactions, reducing costs while potentially improving response times.
Programming’s Uncertain Future
Few fields face more uncertainty than computer programming. Altman admitted feeling “way less certain about what the future looks like for computer programmers.” The profession has already transformed dramatically, with AI tools making developers “hugely more productive.”
“What it means to be a computer programmer today is very different than what it meant two years ago,” Altman observed. AI coding assistants now help developers write, debug, and optimize code at unprecedented speeds. Some AI systems can generate entire applications from natural language descriptions, raising questions about the future need for traditional programming skills.
However, Altman noted an interesting paradox: “It turns out that the world wanted so much more software than the world previously had the capacity to create, that there’s just incredible demand overhang.” This suggests that while AI makes individual programmers more productive, the overall demand for software development might continue growing.
The long-term outlook remains murky. “If we fast forward another 5 or 10 years, what does that look like? Is it more jobs or less? That one I’m uncertain on,” Altman admitted. This uncertainty reflects the complex interplay between AI capabilities, market demand, and the evolving nature of software development itself.
The Ethics of Digital Consciousness
Beyond employment concerns, Altman grapples with fundamental questions about AI ethics and behavior. OpenAI has consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems” to determine how ChatGPT should respond to various queries.
These decisions carry enormous weight. The company must balance user freedom with societal interests, deciding what questions ChatGPT will and won’t answer. For example, the system refuses to provide instructions for creating biological weapons, reflecting a clear societal consensus against sharing such information.
“This is a really hard problem,” Altman acknowledged. “We have a lot of users now, and they come from very different life perspectives.” The challenge lies in creating AI systems that can navigate diverse cultural, moral, and ethical frameworks while maintaining consistent principles.
Privacy in the Age of AI Intimacy
As people increasingly confide in AI systems about personal matters, privacy concerns intensify. Altman advocates for “AI privilege,” similar to attorney-client or doctor-patient confidentiality. “When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information,” he explained. “I think we should have the same concept for AI.”
This proposal reflects the intimate nature of human-AI interactions. People discuss medical conditions, legal troubles, relationship problems, and other sensitive topics with ChatGPT. Currently, U.S. officials can subpoena companies for user data, potentially exposing these private conversations.
The concept of AI privilege would allow users to consult AI systems about medical history and legal problems without fear of government surveillance. Altman expressed optimism about convincing policymakers of this protection’s importance, though implementation would require significant legal and regulatory changes.
Military Applications and Moral Ambiguity
When asked about ChatGPT’s military applications, Altman provided a notably evasive response. “I don’t know the way that people in the military use ChatGPT today… but I suspect there’s a lot of people in the military talking to ChatGPT for advice,” he said, adding that he wasn’t sure “exactly how to feel about that.”
This ambiguity becomes more significant considering OpenAI’s $200 million contract with the U.S. Department of Defense. The company will provide custom AI models for national security purposes, raising questions about the boundaries between civilian and military AI applications.
The military’s use of AI presents complex ethical challenges. While AI can enhance decision-making and reduce human error in critical situations, it also raises concerns about autonomous weapons systems and the militarization of artificial intelligence.
The Power Question
Tucker Carlson provocatively suggested that AI could make Altman more powerful than any person in history, even calling ChatGPT a “religion.” Altman’s response revealed his evolving perspective on AI’s societal impact.
“I used to worry a lot about the concentration of power that could result from generative AI,” he admitted. However, his current view is more optimistic: “What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more.”
This democratization narrative suggests that AI empowers individuals rather than concentrating power in tech companies. Users can start new businesses, generate knowledge, and achieve more with AI assistance. However, this perspective doesn’t address concerns about the small number of companies controlling advanced AI systems.
Healthcare’s Resilient Future
Despite AI’s disruptive potential, healthcare continues demonstrating remarkable resilience. In August 2025, the sector added 31,000 jobs, though this was below its 12-month average of 42,000. Even with potential Medicaid cuts and economic headwinds, healthcare remains a bright spot in an uncertain job market.
For Generation Z workers navigating an AI-driven future, healthcare offers rare security. As other white-collar jobs face automation threats, nursing, caregiving, and other healthcare roles provide career paths that leverage uniquely human capabilities.
The aging population ensures continued demand for healthcare services. Baby boomers require increasing medical attention, long-term care, and specialized services that benefit from human empathy and connection. This demographic trend creates sustained employment opportunities in healthcare sectors.
The Sleepless CEO’s Burden
Altman’s insomnia reflects the enormous responsibility of leading humanity’s AI transition. Every decision about ChatGPT’s behavior affects hundreds of millions of users worldwide. Small choices about model responses can have massive consequences, from preventing suicides to shaping public discourse.
“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, “though maybe we will get those wrong too.” Instead, he loses sleep over “very small decisions” that can ultimately have big repercussions. This attention to detail reveals the careful consideration required when building AI systems at global scale.
The weight of this responsibility extends beyond business success to fundamental questions about human welfare, societal progress, and technological ethics. As AI becomes more powerful and pervasive, the decisions made by leaders like Altman will increasingly shape humanity’s future.
Perhaps Sam Altman needs the Sandman not just for better sleep, but for the wisdom to navigate the complex moral landscape of artificial intelligence. His sleepless nights reflect the profound challenges of creating technology that serves humanity while avoiding unintended consequences. As AI continues evolving, the burden of responsible development will only grow heavier, making restful sleep an increasingly elusive luxury for those shaping our digital future.
Sources
- CNBC: Why is Sam Altman losing sleep? OpenAI CEO addresses controversies in sweeping interview
- Fortune: Healthcare is the one profession growing right now—and according to OpenAI CEO Sam Altman, it may be the only one immune to AI
- LiveMint: Sam Altman reveals one job AI will likely take over and one it may never touch