AI: Year-end efforts to advance AI Health & Safety. RTZ #950
Even as companies and governments lean into accelerating the AI Tech Wave at the rate of hundreds of billions of dollars a year, the same parties are outlining efforts to mitigate the potential negative health impacts of LLM AIs on mainstream users.
This last week of 2025 alone is seeing examples of such AI health and safety actions at the company, state, and national levels.
Engadget discusses OpenAI’s efforts in this context in “OpenAI is hiring a new Head of Preparedness to try to predict and mitigate AI’s harms”:
“CEO Sam Altman posted about the role on X, saying the models ‘are starting to present some real challenges.’
“OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company’s safety strategy. It comes at the end of a year that’s seen OpenAI hit with numerous accusations about ChatGPT’s impacts on users’ mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the “potential impact of models on mental health was something we saw a preview of in 2025,” along with other “real challenges” that have arisen alongside models’ capabilities. The Head of Preparedness “is a critical role at an important time,” he said.”
“Per the job listing, the Head of Preparedness (who will make $555K, plus equity), “will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.” It is, according to Altman, “a stressful job and you’ll jump into the deep end pretty much immediately.”
“Over the last couple of years, OpenAI’s safety teams have undergone a lot of changes. The company’s former Head of Preparedness, Aleksander Madry, was reassigned back in July 2024, and Altman said at the time that the role would be taken over by execs Joaquin Quinonero Candela and Lilian Weng. Weng left the company a few months later, and in July 2025, Quinonero Candela announced his move away from the preparedness team to lead recruiting at OpenAI.”
The effort makes corporate and competitive sense, especially given that core rival Anthropic has focused on AI Safety since its founding, when Dario Amodei and other former OpenAI leaders split off to build the competing LLM AI company.
This week also saw notable AI Safety action at the US state level.
Engadget again outlines this in “New York State will require warning labels on social media platforms”:
“The warnings will appear when users interact with a feature the state deems addictive.”
“The State of New York will now require social media platforms to display warning labels similar to those found on cigarettes. The bill was passed by the New York Legislature in June and signed into law by Gov. Kathy Hochul on Friday. It will apply to any platforms that feature infinite scrolling, auto-play, like counts or algorithmic feeds. The labels will caution those on the platform about potential harm to young users’ mental health.”
“Social media companies will be required to display these warning labels when a user first interacts with any of the features the state considers predatory. The warning will also be displayed periodically after that interaction.”
“Keeping New Yorkers safe has been my top priority since taking office, and that includes protecting our kids from the potential harms of social media features that encourage excessive use,” Gov. Hochul said in a statement. The law will apply when any of these platforms are being accessed from New York. Gov. Hochul also signed two bills into law last year aimed at protecting kids from social media.”
“Concerns over the mental health effects of social media platforms on younger users have been mounting and government bodies have been increasingly taking action. A bill similar to the one in New York has been proposed in California. This year Australia became the first nation to ban social media for children, with Denmark soon to follow.”
The third example, at the national level, comes from China, the second largest AI market in the world. It matters especially given the global AI Space Race context going into next year, and particularly since China also leads global efforts on AI-powered humanoid robots.
Bloomberg explains these efforts in “China Issues Draft Rules to Govern Use of Human-Like AI Systems”:
“China plans to tighten rules around the use of human-like artificial intelligence by requiring providers to ensure their services are ethical, secure and transparent.”
“Users must be informed they are dealing with AI when they log in to a service and at two-hour intervals, or when signs of overdependence can be detected, the country’s cyberspace watchdog said in a statement on its website on Saturday. The proposals are open for public consultation until Jan. 25.”
“AI systems that are designed to act like humans should also implement robust security and ethical review systems, while acting with “core socialist values” and refraining from publishing content that could compromise national security, the Cyberspace Administration of China said.”
All three developments above, in this last week of 2025, show that AI Safety goes beyond lip service in these early days of the AI Tech Wave.
The intentions are there: to balance the intense AI infrastructure investments being made at every level with efforts to scale and advance AI into the mainstream next year and beyond. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)