Meta Assembles Tech Titan Advisory Group to Steer AI Domination
Mark Zuckerberg isn't mincing words: Meta wants to be the king of Artificial Intelligence.
To achieve this audacious goal, Meta has assembled a heavyweight AI advisory council comprising tech titans like Stripe CEO Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lütke, and investor Charlie Songhurst.
These industry veterans will offer their expertise on technological advancements, strategic growth, and innovation, guiding Meta's AI and overall technology roadmap. This move highlights Meta's aggressive push into AI.
The company is upping the ante by a cool $5 billion beyond its initial spending forecasts to develop next-generation consumer, developer, business, and hardware-focused AI products.
This hefty investment signifies Meta's intent to be a frontrunner in the intensifying AI arms race among tech giants.
The company is aggressively expanding its AI product portfolio, encompassing both hardware (VR headsets, smart glasses) and software (consumer-facing AI assistants).
Zuckerberg acknowledges the need for patience, recognising that these advancements may not yield immediate financial returns.
The investment, reportedly totalling around $35 billion, signifies Meta's unwavering commitment to AI and sets a new benchmark for how tech giants capitalise on this transformative technology.
While Google, Microsoft, and Anthropic remain formidable competitors, Meta's strategic advisory group and substantial financial backing position the company as a dominant force in the race to develop the next generation of AI-powered tech products.
While the frequency of the advisory group's meetings remains undisclosed, Zuckerberg has made it clear that their guidance will be instrumental in shaping Meta's technological future.
Lack of Diversity in Meta's AI Advisory Board
However, Meta came under fire recently for establishing an AI advisory board composed entirely of white men.
This lack of diversity has raised concerns about the potential for bias in the development of Meta's AI products.
Women and people of colour have long been calling for greater representation in the field of AI, arguing that their exclusion has led to negative consequences.
For example, medical research in the 1970s routinely excluded women from clinical trials, resulting in treatments whose safety and effectiveness for women were never properly established.
Similarly, a 2019 study by the Georgia Institute of Technology found that the object-detection systems used in self-driving cars were less accurate at detecting pedestrians with darker skin tones, making Black pedestrians more likely to be hit.
AI and the Perpetuation of Bias
AI systems are trained on existing data, which can often be biased. This bias is then reflected in the outputs of the AI system.
For instance, AI-powered voice assistants often have difficulty understanding diverse accents, and AI-text detectors have been shown to flag the writing of non-native English speakers as AI-generated.
Additionally, facial recognition software has been shown to disproportionately misidentify Black people as criminal suspects.
The lack of diversity in Meta's advisory board is likely to exacerbate these problems. With only white men on the board, there is a significant risk that the perspectives of women and people of colour will not be adequately considered during the development of AI products.
This could lead to AI systems that perpetuate and amplify existing biases.
The Dark Side of AI for Women
Women are particularly vulnerable to the negative effects of AI. The vast majority of AI-generated deepfake videos target women, typically with sexually explicit content.
This is a form of harassment and can have a significant impact on the lives of the victims.
One high-profile example involved non-consensual deepfakes of Taylor Swift that went viral on social media. While platforms like X eventually intervened, the incident highlights the vulnerability of women to this type of abuse.
The ease of access to deepfake technology is another concern. Apps that can manipulate photos or swap faces onto pornography are readily available, and there have been reports of middle and high school students using this technology to create deepfakes of their classmates.
These are just a few examples of how AI can be used to harm women. It is crucial to have women represented in the development of AI products to ensure that these risks are mitigated.
AI and the Future of Work
The rapid development of AI has the potential to automate many jobs, particularly those that do not require a four-year college degree. Minority workers are often overrepresented in these types of jobs.
A report by McKinsey suggests that AI could automate roughly half of all jobs that pay over $42,000 annually and do not require a four-year degree.
This raises concerns about the potential for job displacement, particularly among minority groups.
An all-white advisory board at Meta is unlikely to adequately consider the impact of AI on the workforce. Without diverse perspectives, there is a significant risk that AI products will exacerbate existing inequalities.
Flaws in Customer Service and Cybersecurity
Meta, formerly known as Facebook, boasts over 2.8 billion users. While this platform has revolutionised communication, a troubling reality lurks beneath the surface – a disregard for user safety and well-being.
This lack of commitment manifests in two key areas: Meta's inadequate customer service and its cybersecurity vulnerabilities that leave users exposed to hackers and scammers.
Lost in a Web of Automation
Meta's customer service is a labyrinth designed to leave users frustrated and helpless. When users encounter issues like hacking or scams, they are met with a nightmarish maze of automated responses and generic FAQs.
There is no human to speak with, no lifeline to grasp onto in the midst of a digital crisis. This absence of a dedicated support system exacerbates the trauma experienced by victims whose online lives have been upended.
Imagine spending hours, even days, desperately searching for a way to contact someone at Meta who can address your concerns.
The lack of a direct hotline or a responsive email system creates a situation where compromised users are left stranded, unable to recover lost accounts or dispute unauthorised transactions.
Exploiting the Customer Service Void
The void left by Meta's absent customer service creates a breeding ground for a particularly cruel kind of scammer. These predators prey on the vulnerability of victims, offering false hope with fake customer service phone numbers.
Social media and blog posts become hunting grounds where scammers, masquerading as Meta support agents, promise swift resolutions to critical account issues.
These deceptive tactics only worsen the situation, adding insult to injury for users who are already struggling.
The facade of legitimacy is often crafted by circulating fake customer service phone numbers across various online platforms, including professional networks like LinkedIn.
Unsuspecting users, desperate for help, may fall victim to the allure of a supposed lifeline. However, falling for these scams can have dire consequences.
Scammers may attempt to steal sensitive information, such as passwords or personal details, further jeopardising the user's security.
Meta's Cybersecurity Weaknesses: A Playground for Hackers
Meta's platforms, including Facebook, Instagram, and WhatsApp, have become a haven for cybercriminals. Hackers exploit vulnerabilities in Meta's security systems to gain unauthorised access to user accounts, wreaking havoc on individuals' lives and finances.
A common tactic involves hijacking an account and posting content that violates Meta's terms of service, prompting the platform to disable it.
With no recourse for retrieval, the account's rightful owners are left in a state of shock, especially when they realise they've lost access to years of memories, connections, and personal data.
The aftermath of such breaches can be devastating.
Financial losses, identity theft, and emotional distress are all too common consequences. The loss goes beyond the monetary; it's a decade of memories, connections, and personal milestones that vanish without a trace.
The emotional toll is immeasurable, as many find themselves isolated from loved ones and communities they once cherished online.
Prioritising User Safety Over Executive Salaries
As users grapple with the consequences of cybercrime on Meta's platforms, the company's leadership enjoys substantial compensation.
Critics argue that while the leadership reaps the benefits of Meta's success, the lack of investment in robust customer support systems shows a blatant disregard for the well-being of its users. And now the company is pouring a huge sum into AI, which may worsen the issue.
Mark Zuckerberg's compensation package, among the richest in the tech industry, raises serious questions about Meta's priorities. While users struggle to reclaim their digital lives, the vast divide between the CEO's earnings and the inadequate support provided becomes a glaring issue.
The solution is clear: Meta must prioritise user safety and well-being.
This requires a two-pronged approach.
First, the company needs to invest in comprehensive customer support systems, not AI. Users deserve more than automated responses and generic FAQs. They deserve access to real people who can help them navigate the complexities of cybersecurity threats.
Second, Meta must strengthen its cybersecurity defences. The company needs to identify and address vulnerabilities in its platforms that make them easy targets for hackers. By prioritising user safety, Meta can begin to rebuild trust and move beyond the troubling reality it has created.
Meta's Lack of Cooperation in Combating Scams on Facebook
Singapore's fight against online scams has exposed a concerning lack of cooperation from Meta, the parent company of Facebook. While the Ministry of Home Affairs (MHA) has partnered with other online platforms to implement safeguards, Meta has consistently pushed back against these recommendations.
This lack of cooperation is particularly troubling considering the prevalence of scams on Meta's platforms.
According to Minister of State for Home Affairs Sun Xueling, Facebook, WhatsApp, and Instagram were responsible for a staggering 43% of all scam cases in Singapore during 2023, resulting in financial losses exceeding $280 million.
Furthermore, Facebook stands out as the sole platform among those reviewed by MHA's E-commerce Marketplace Transaction Safety Ratings (TSR) that has not begun implementing any of the suggested safety features.
The contrasting approaches of other online platforms highlight the potential effectiveness of MHA's recommendations. Shopee, for instance, introduced seller verification measures that required users to confirm their identities using government records.
This initiative resulted in a remarkable 71% decrease in e-commerce scams on their platform between 2021 and 2023. Carousell, another online marketplace, collaborated with the Singapore Police Force by co-locating staff within the Anti-Scam Command office.
This collaboration significantly reduced the turnaround time for removing fraudulent online profiles and advertisements, bringing it down from days to mere hours.
In response to Sun's criticism in Parliament, Meta expressed "dismay" over the accusations. Their spokesperson claimed to be actively engaged in discussions with MHA and seriously considering their suggestions.
However, Meta emphasised the industry-wide nature of the scam problem, implying that a single company cannot resolve it independently. They pledged continued cooperation on consumer education campaigns with government partners while promising ongoing improvements to their products and tools to empower users in protecting themselves against scams.
But are they doing anything?
Fake Singapore News Ads Slip Through Meta's Cracks
Social media users in Singapore have been encountering misleading advertisements disguised as news articles on Facebook. These fabricated stories, often featuring local celebrities like JJ Lin and former Prime Minister Goh Chok Tong, use attention-grabbing headlines to target Singaporeans.
Despite users reporting these "sponsored" ads, Meta, the company that owns Facebook, has not taken action against them.
According to the comments section of a YouTube video covering this news, Meta has responded to reports by claiming the ads do not violate their advertising standards, leaving many users frustrated with the platform's response to what appears to be blatant misinformation.
Can an Unsecured House Hold a Crown Jewel?
The development of safe and inclusive AI requires a comprehensive approach that considers the needs of all users. This includes having a diverse range of voices represented at the research and development stages.
Meta's advisory board falls short in this regard. With its lack of diversity, the board is unlikely to be able to advise on the development of AI products that are truly inclusive.
There is a need for a more representative approach to AI development in order to ensure that everyone benefits from this technology.
At the same time, Meta's ambitious plan for AI dominance hinges on a critical but overlooked aspect: user safety.
While building a diverse advisory board is a crucial step for inclusive AI development, it seems Meta has a more immediate challenge – securing its platforms from the threats that already plague them.
Can a company struggling to safeguard user data and combat online scams effectively lead the charge in a responsible and equitable AI revolution?
This is a question Meta must answer before its AI aspirations can truly take shape.