Open AI: Curious case of governance structure
by: Senen L. Matoto, FICD
ICD Trustee
Institute of Corporate Directors
Unless you have been living under a rock or been marooned on an island with no internet since November 2022, when we woke up to the realities of the new world of generative artificial intelligence, ChatGPT, built on GPT (which stands for Generative Pre-trained Transformer), must by now be as familiar to you as Netflix.
ChatGPT is, of course, that pioneering technological wonder created by OpenAI that answers nearly any question or request you put to it, with much more promised in the future.
Dubbed a testament to man's creative genius and a potential boon to humanity's myriad concerns, it is, in the same breath, branded a bane and a precursor of an existential threat that humankind should fear.
Why is this so? ChatGPT launched in November 2022 as a text-only service that could respond to questions only on the basis of pre-programmed (i.e., pre-trained) data current as of September 2021. In other words, its responses could not capture more recent information, much less anticipate a future event.
Worse, when it lacked the appropriate data, it could make up facts, or "hallucinate," throwing off the inquirer. Notwithstanding this limitation in the software, more than a million users signed up within five days of launch.
The current GPT versions, developed in less than a year, now carry pre-trained data as recent as April 2023; they can analyze photos, scan documents on the web for more recent developments in a particular field, and respond in the spoken word. ChatGPT has proven to be a big hit and now has over 100 million weekly active users, as OpenAI announced at its recent DevDay event a few weeks ago.
The ultimate objective of AI proponents is to create machines with human-like intellect that can make decisions, solve problems, hear, speak, and even feel and hold sentiments. With AI's superhuman capabilities and man's likely dependency on it, visions of a dangerous world in which machines become the overlords of man have prompted a clamor for restraint from non-profit-oriented sociologists and philanthropists. And, logically, it is those same creators who are best positioned to rein in the rapid advance of AI, through the curiously structured hybrid of a not-for-profit and a capped-profit organization led by OpenAI's head geek and co-founder, Sam Altman.
Frankly, in my years of involvement with corporate governance for both profit and non-profit organizations, OpenAI's hybrid structure is a first for me. It is usually either one or the other, not both combined. OpenAI was founded in 2015 solely as a non-profit, with $1 billion pledged by tech entrepreneurs and geeks primarily for the research, development, and advancement of artificial intelligence for the benefit of all humanity rather than of any single enterprise. A few years later, however, the realities of the massive capital needed to sustain the dash to advance AI set in on the founders.
A for-profit subsidiary, OpenAI Global, LLC, was created to accommodate investors looking for returns, albeit with their share of the profits capped at no more than 100 times their investment. This new entity enabled profit-oriented investors such as Microsoft to provide $10 billion in funding for OpenAI's research. The unusual twist, though, is that these investors are not entitled to vote on or shape the board's composition, which means they have neither board representation nor a say in the company's strategic direction. OpenAI would thus continue to be guided by its altruistic mission and remain under the control of the non-profit's directors.
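To make the capped-profit arithmetic concrete, here is a minimal sketch in Python of how a 100x return cap could work in principle. The function, the split logic, and the figures are illustrative assumptions for this column, not OpenAI's actual investment terms.

# Illustrative only: split a gross return between an investor whose upside
# is capped at cap_multiple times the original investment, with any excess
# flowing back to the non-profit parent. Hypothetical mechanics, not OpenAI's terms.
def capped_return(investment: float, gross_return: float, cap_multiple: float = 100.0):
    cap = investment * cap_multiple                  # most the investor can ever receive
    investor_share = min(gross_return, cap)          # investor is paid only up to the cap
    nonprofit_share = max(gross_return - cap, 0.0)   # anything above the cap goes to the mission
    return investor_share, nonprofit_share

# Hypothetical numbers: a $10 stake that eventually returns $1,500
investor, nonprofit = capped_return(10.0, 1_500.0)
print(investor, nonprofit)   # 1000.0 500.0 -> investor capped at 100x, remainder to the non-profit

In this simplified picture, the cap is the only term; the actual agreements are, of course, far more intricate.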
This brings us to the recent brouhaha at OpenAI. Altman was fired by the board for supposedly misrepresenting, or not candidly communicating, critical information that the board should have been aware of. The details have not been publicly disclosed, but it is safe to assume that Altman was pushing the enhancement of the ChatGPT software faster than the board was comfortable with, the board presumably preferring a more studied and deliberate pace that ensures the existential risks are properly controlled and managed.
The subsequent clamor of the employees and investors for Altman's reinstatement, fueled no doubt by Microsoft's immediate offer to hire Altman and his colleagues, resulted in the highly unusual upending of the board's composition, which Altman demanded as a condition for his return.
All I can say is: wow! This will be a great case study for governance students to ponder and discuss, as it throws into disarray the conventional norms of good corporate governance that uphold the board's ultimate authority and oversight over management. If you were a director of this company, what would you have done?
Until next week… OBF!
Disclaimer:
On December 5, 2023, "Open AI: Curious case of governance structure" was published in the Daily Tribune.
It was authored by Senen "Bing" L. Matoto, a Trustee of the Institute of Corporate Directors. You can read the full article at https://tribune.net.ph/2023/12/open-ai-curious-case-of-governance-structure/