Just last week, the launch of GPT-4o generated excitement over its "omni" capabilities, which combine real-time text, audio, and visual processing. The model promises a transformative impact on human-computer interaction and sets a new standard in AI versatility and efficiency.
In a surprising turn of events, OpenAI's Chief Scientist and co-founder, Ilya Sutskever, announced his departure on X.
On the same day, Jan Leike, one of the leaders of the “superalignment” team, made a similar announcement.
The “superalignment” team, led by both individuals, has recently experienced a string of departures. In response, the company made a significant decision last Friday: to disband the team, which was established a year ago with the explicit purpose of investigating the profound, long-term risks posed by artificial intelligence.
Resignations at OpenAI Raise Safety Concerns
Sutskever announced his departure after disagreements with OpenAI CEO Sam Altman about prioritising product development over safety measures.
Leike, who co-led the disbanded “superalignment” team, echoed these concerns on social media, highlighting struggles to obtain sufficient computing resources and a marginalisation of safety culture within OpenAI.
Leike's criticism focused specifically on OpenAI's pursuit of powerful AI models like GPT-4o, arguing that the company was not adequately preparing for the potential risks associated with such advancements. He emphasised that more resources and a stronger safety culture are needed if the company is to achieve its mission safely.
These resignations come amidst a period of turmoil at OpenAI. The company has been criticised for prioritising product releases over safety, with Tesla CEO Elon Musk suggesting that safety is not its top priority.
Additionally, OpenAI has been accused of flooding its chatbot store with spam and scraping data from YouTube in violation of the platform's terms of service.
OpenAI Reassures Public on Safety Measures After Key Departures
OpenAI is facing questions about its commitment to AI safety following the resignations of two key figures who co-led the company's "superalignment team" focused on safety in advanced AI.
In response to these concerns, OpenAI's CEO Sam Altman and President Greg Brockman co-authored a message emphasising their awareness of the risks and potential of AGI.
They pointed to their advocacy for international AGI safety standards and their pioneering work in examining AI systems for potential catastrophic threats.
Altman and Brockman also highlighted ongoing efforts to ensure the safe deployment of increasingly advanced AI systems.
They cited the development and release of GPT-4, a large language model, as an example where safety measures were implemented throughout the process. They further noted ongoing efforts to improve model behaviour and abuse monitoring based on lessons learned.
While the departures raise concerns, OpenAI maintains it has a broader safety strategy beyond the disbanded “superalignment team”.
The company reportedly has AI safety specialists embedded across various teams, as well as dedicated safety groups, including a preparedness team focused on mitigating potentially catastrophic risks from AI systems.
Additionally, Altman has publicly voiced support for the creation of an international agency to oversee AI development, acknowledging the potential for "significant global harm" if not properly managed.
OpenAI Appoints New Chief Scientist
Jakub Pachocki has been appointed as the new chief scientist at OpenAI, replacing Ilya Sutskever. Pachocki has been instrumental in the development of GPT-4, OpenAI Five, and other key projects.
OpenAI CEO Sam Altman praised Pachocki's leadership and expertise, expressing confidence in his ability to guide the company towards safe and beneficial artificial general intelligence (AGI).
This announcement comes amidst recent turmoil at OpenAI, where Altman was temporarily removed from his position due to a lack of transparency.
Sutskever played a key role in both ousting and then advocating for Altman's return, leading to speculation about Sutskever's potential knowledge of undisclosed information.
Security Flaws Found in ChatGPT Plugins
Back in November 2023, researchers discovered serious security vulnerabilities in third-party plugins for OpenAI's ChatGPT. These flaws could allow hackers to steal user data and take control of online accounts.
The first vulnerability impacted the installation process, enabling hackers to install malicious plugins without a user's knowledge and steal sensitive information like passwords from private messages.
The second vulnerability affected PluginLab, a platform for creating custom ChatGPT plugins. Hackers could exploit this flaw to gain control of user accounts on third-party platforms like GitHub.
The third vulnerability involved OAuth redirection manipulation, allowing attackers to steal user credentials through multiple plugins.
These vulnerabilities were discovered throughout 2023. The first one was found in June and reported to OpenAI in July. In September, vulnerabilities were identified in PluginLab.AI and KesemAI plugins and reported to the respective vendors. All identified vulnerabilities have since been patched.
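To illustrate the third class of flaw in general terms, the sketch below is a minimal Python example with hypothetical URLs and an allow-list of our own invention; it is not code from OpenAI, PluginLab, or any plugin vendor. It shows the basic defence against OAuth redirection manipulation: refusing to send an authorisation flow to a redirect target that is not explicitly trusted.

```python
# Minimal illustration only: hypothetical URLs and allow-list, not code from
# OpenAI, PluginLab, or any plugin vendor.
#
# The generic pattern behind an OAuth redirection flaw: if a service forwards
# the authorisation code to whatever redirect_uri the request supplies, an
# attacker can substitute their own URL and capture the user's credentials.
# Pinning redirect targets to an allow-list is the standard defence.

ALLOWED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",  # assumed trusted callback
}


def build_authorize_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build an OAuth authorisation URL, rejecting untrusted redirect targets."""
    # The vulnerable version of this function would skip the check below and
    # trust whatever redirect_uri arrives with the request.
    if redirect_uri not in ALLOWED_REDIRECTS:
        raise ValueError(f"redirect_uri not on the allow-list: {redirect_uri}")
    return (
        "https://auth.example.com/authorize"
        f"?response_type=code&client_id={client_id}"
        f"&redirect_uri={redirect_uri}&state={state}"
    )


if __name__ == "__main__":
    # Legitimate flow: the trusted callback is accepted.
    print(build_authorize_url("demo-client", "https://plugin.example.com/oauth/callback", "xyz"))
    # Attack attempt: an attacker-controlled redirect is rejected instead of
    # receiving the authorisation code.
    try:
        build_authorize_url("demo-client", "https://attacker.example.net/steal", "xyz")
    except ValueError as err:
        print("Blocked:", err)
```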
Musk Sues OpenAI, Alleging Broken Promises and a Betrayal of Humanity
Elon Musk, the outspoken CEO of Tesla and SpaceX, launched a legal battle against OpenAI and its CEO Sam Altman in March 2024.
The lawsuit centres on a fundamental disagreement about the future of OpenAI, a research lab initially established with the ambitious goal of developing artificial intelligence (AI) for the benefit of humanity.
Musk, a co-founder of OpenAI in 2015, claims he reached an agreement with Altman and other leaders on the non-profit structure of the organisation.
This agreement, according to the lawsuit filed in San Francisco, was based on verbal commitments made during the company's inception. However, legal experts cast doubt on the enforceability of such an agreement, given the lack of a formal written contract.
The crux of the dispute lies in OpenAI's recent shift towards a for-profit model.
In 2019, OpenAI established a for-profit arm, and it has since released its most powerful model to date, GPT-4, under an exclusive licensing deal with Microsoft.
Musk views these actions as a betrayal of the founding mission and a shift towards prioritising profits over the well-being of humanity.
The lawsuit alleges that OpenAI has strayed far from its original non-profit path and become a "de facto subsidiary" of Microsoft.
Musk argues that Altman and OpenAI President Greg Brockman reaffirmed their commitment to the non-profit model through written messages exchanged in the years following the company's founding.
One such message, included in the lawsuit, shows Altman expressing his enthusiasm for the non-profit structure in 2017.
Apple's Collaboration with OpenAI Raises Security and Privacy Concerns
Apple's upcoming iOS 18 update is set to introduce significant artificial intelligence (AI) features, and a recent deal with OpenAI, the creator of ChatGPT, raises questions about security and privacy for iPhone users.
While details are still emerging, reports suggest Apple will leverage OpenAI's technology for chatbot functionality, sparking debate about potential risks.
At the core of Apple's approach to AI lies a commitment to user privacy. Their motto, "Privacy. That's iPhone," reflects a focus on on-device processing, where data is analysed directly on the user's phone rather than being sent to external servers.
This approach minimises the risk of data breaches and unauthorised access.
However, on-device processing has its limits. Capable AI models demand large amounts of data and compute, and a phone's local hardware and storage can become a bottleneck. This is where the partnership with OpenAI comes in.
By outsourcing chatbot development to OpenAI, Apple potentially bypasses the data storage hurdle while offering a desired user feature.
Security concerns arise when considering the transfer of data between Apple devices and OpenAI servers.
The specifics of this data exchange remain unclear. If user data is sent to OpenAI for chatbot training, it raises questions about how securely it's handled and whether it adheres to Apple's strict privacy standards.
This approach mirrors Apple's existing partnership with Google for search functionality. Here, a multi-billion dollar deal ensures a baseline level of data privacy, but some user information is undoubtedly transferred to Google's servers.
The extent of potential security risks hinges on the details of the Apple-OpenAI collaboration. Will user data be anonymised before reaching OpenAI? How long will it be retained? These are crucial questions that need to be addressed to ensure user privacy is not compromised.
Another layer of complexity arises with on-device processing limitations.
While Apple is developing powerful AI chips for its latest devices, older iPhones may not be able to handle the demands of complex AI tasks.
This could create a scenario where some users benefit from advanced AI features on iOS 18, while others are left behind due to hardware constraints.
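To make this trade-off concrete, here is a purely hypothetical sketch in Python; the device names, capability numbers, and routing logic are invented for illustration and do not reflect Apple's actual design. It simply shows the idea that capable hardware could keep requests on the device, while older phones would either fall back to an external service, with all the privacy questions above, or go without the feature.

```python
# Purely hypothetical: invented device capabilities and thresholds, not
# Apple's design. Illustrates routing an assistant request on-device when
# the hardware allows it, and off-device otherwise.

from dataclasses import dataclass


@dataclass
class Device:
    model: str
    ai_throughput_tops: float  # rough on-device AI throughput (illustrative)


# Invented threshold: the compute an on-device model is assumed to need.
LOCAL_MODEL_MIN_TOPS = 30.0


def route_request(device: Device, prompt: str) -> str:
    """Decide where a hypothetical assistant request would be processed."""
    if device.ai_throughput_tops >= LOCAL_MODEL_MIN_TOPS:
        # Data stays on the phone, matching the on-device privacy model.
        return f"on-device: '{prompt}' never leaves {device.model}"
    # Anything sent off-device raises the questions discussed above:
    # is it anonymised, how long is it retained, who can access it?
    return f"cloud: '{prompt}' sent to an external service from {device.model}"


if __name__ == "__main__":
    print(route_request(Device("newer iPhone", 35.0), "summarise my notes"))
    print(route_request(Device("older iPhone", 17.0), "summarise my notes"))
```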
Ultimately, Apple's AI strategy in iOS 18 walks a tightrope between user privacy, security, and functionality. While on-device processing offers a secure environment, it may limit functionality for some users.
The Complexity of AI Safety and Alignment
The concept of the "impossible triangle of AI" posits that efficiency, safety, and benefits cannot all be fully optimised simultaneously. In OpenAI's latest developments, it appears that safety has been deprioritised in favour of maximising efficiency and user benefits.
Privacy advocates and regulators have criticised OpenAI because its AI chatbot, ChatGPT, generates inaccurate information about people.
A privacy nonprofit in Austria called Noyb, founded by Max Schrems, filed a complaint against OpenAI alleging that ChatGPT violated the EU's General Data Protection Regulation (GDPR) by making up personal details about people.
Schrems himself experienced this when ChatGPT gave him an incorrect birthdate and OpenAI said they couldn't fix it.
Another complaint was filed against OpenAI last year in Poland, and the Italian data authority also warned them about breaching GDPR.
In the US, the Federal Trade Commission is investigating the potential for reputational harm caused by ChatGPT's hallucinations.
Perspective from Meta’s AI Chief Scientist
Yann LeCun, Meta's Chief AI Scientist and a notable competitor to OpenAI, asserts that the journey toward Artificial General Intelligence (AGI) will not be marked by a single, groundbreaking event.
Instead, it will be characterised by continuous advancements across various domains.
He believes that with each significant AI breakthrough, some people might prematurely declare that AGI has been achieved, only to later refine their definitions and understandings.
LeCun argues that before addressing the urgent need to control AI systems purportedly much smarter than humans, it is crucial to first design systems that surpass even the intelligence of a house cat.
Drawing an analogy to the development of aviation, he suggests that just as it took decades of meticulous engineering to create safe, long-haul jets, achieving and ensuring the safety of highly intelligent AI systems will similarly require many years of progressive improvements.
LeCun emphasises that the current capabilities of AI, such as those seen in large language models (LLMs), should not be confused with true intelligence.
He foresees a gradual evolution where AI systems will become incrementally smarter and safer through iterative refinements over an extended period.
Is the Race for Powerful AI Putting the Brakes on Safety?
The recent turmoil at OpenAI highlights the complex tightrope walk of AI development.
On one side, there's the allure of groundbreaking advancements like GPT-4o's capabilities.
On the other side lies the ever-present need for robust safety measures and ethical considerations.
Striking the right balance will require ongoing collaboration between researchers, policymakers, and the public.
Can we achieve the benefits of AI progress without compromising safety and security?
This is the critical question that will define the future of our relationship with artificial intelligence.