OpenAI is launching an ‘independent’ safety board that can stop its model releases

Vector illustration of the ChatGPT logo. Image: The Verge

OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee recommended creating the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”

The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says….


