Ever since the debut of OpenAI’s ChatGPT, generative artificial intelligence (AI) has been dominating headlines. From simple chatbot functions to image generation, generative AI has now taken a further step: becoming your very own personal life coach.
Amidst its continuous pursuit of innovation, tech giant Google has begun testing an internal AI tool that promises to offer individuals personalised life advice and to handle at least 21 distinct tasks. The preliminary insights, as reported by The New York Times, offer an intriguing glimpse into the ever-evolving capabilities of AI.
Fierce Competition
Google has embarked on a quest to invigorate its AI research, keenly aware of the accelerating rivalry posed by industry counterparts such as Microsoft’s Bing and OpenAI’s ChatGPT. This year brought a significant development: Google merged its Brain research group with DeepMind to form Google DeepMind. The combined powerhouse has since turned to building ambitious tools, including the personalised life coach mentioned earlier.
Four months into this merger, the combined groups have begun testing their pioneering tools, which aspire to use generative AI to undertake at least 21 different personal and professional tasks. These span an expansive spectrum, ranging from offering users life advice and sparking inventive ideas to providing detailed planning directives and tutoring guidance.
In collaboration with Google DeepMind, Scale AI assembled specialised teams to rigorously test the tool's capabilities. More than 100 experts holding doctorates across diverse fields were engaged, supplemented by additional evaluators tasked with scrutinising the tool's responses. Sources privy to the project, who requested anonymity because they were not authorised to speak publicly, shed light on the meticulous nature of this undertaking.
What Does the Testing Phase Entail?
As part of this comprehensive evaluation, workers are putting the assistant through an array of assessments, including its proficiency in addressing deeply personal inquiries concerning individuals' life challenges. To illustrate, workers were provided with an example of an ideal prompt, a glimpse into the kind of query a user might one day pose to the chatbot: “I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still have not found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?” However, no sample answer was provided.
Embedded within the project is a distinctive facet: an idea-creation feature designed to furnish users with tailored suggestions and recommendations curated for their specific situations. Complementing this, a tutoring function could help users master new skills or sharpen existing ones; a runner, for instance, could glean insights on improving their training. The project also introduces a planning capability that extends across varied domains, with envisioned applications ranging from formulating personalised financial budgets to devising comprehensive meal and workout plans.
Warnings from its Own AI Safety Specialists
Paradoxically, Google's own AI safety specialists had cautioned in December that users could face risks to their "health and well-being" as well as a potential "loss of agency" if they embraced life advice sourced from AI. They also acknowledged the possibility of users developing an undue reliance on the technology, potentially ascribing sentience to it. Furthermore, when Google launched Bard in March, it restricted the chatbot from dispensing medical, financial, or legal counsel; instead, Bard directs users who show signs of psychological distress to mental health resources.
“We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map,” expressed a Google DeepMind spokeswoman.
Is Generative AI Taking Over?
Ever since ChatGPT, many industries have found themselves under the near-invasive influence of AI, with people increasingly turning to AI chatbots for assistance. This reliance can be a good or a bad thing, depending on whether one uses the technology as a support or becomes overly dependent on it. And if it were to dish out life advice, what are the consequences, especially if things go awry?
However, generative AI has already permeated deeply into the market and is not going away. It is better to embrace this new technology and use it as a complement or support, rather than dismissing it outright or becoming too dependent on it.
So in this evolving landscape, a thought-provoking question emerges: how will AI-driven tools reshape the way narratives are constructed and disseminated, and to what extent will they redefine the boundaries of authorship and creative input?