The theme of the Davos Forum in January 2024 was artificial intelligence.
AI was being peddled everywhere: sovereign states touted their AI infrastructure; intergovernmental organizations deliberated the regulatory implications of AI; business executives hyped its promise; political titans debated its national security implications; and almost everyone on Main Street gushed about it.
Yet a trace of hesitation lingered in my mind: is the hype warranted? Here are ten things you should know about artificial intelligence (the good, the bad, and the ugly), compiled from some of the talks I gave in Davos last month.
1. The accurate term is "generative" artificial intelligence. What does "generate" mean? Previous waves of AI innovation learned patterns from data sets and recognized those patterns when classifying new input data; this wave learns large models (pattern ensembles, in effect) and uses those models to creatively generate text, video, audio, and other content.
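The contrast between recognizing patterns and generating from them can be sketched with a toy bigram model. This is a minimal illustration, not how large models actually work; the corpus and the `familiarity` and `generate` names are purely illustrative:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn patterns: bigram successor lists, a crude stand-in for a
# learned "pattern ensemble".
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def familiarity(tokens):
    # Recognition (earlier AI waves): score how well a new input
    # matches the learned patterns.
    pairs = list(zip(tokens, tokens[1:]))
    return sum(b in bigrams[a] for a, b in pairs) / len(pairs)

def generate(start, n=5, seed=0):
    # Generation (this wave): use the same learned patterns to
    # produce new content, one sampled token at a time.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = bigrams.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

The same learned statistics support both tasks; what is new is running them "forward" to create content rather than "backward" to classify it.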
2. No, hallucination is not an aberration. When a large pre-trained model is asked to create content, it does not always hold fully formed patterns to guide the generation; where its learned patterns are only partial, the model has no choice but to fill in the blanks, and that is what produces the hallucinations we observe.
3. As some of you may have observed, the generated output is not necessarily repeatable. Why? Because generating new content from partially learned patterns involves randomness; it is an inherently stochastic activity. That is a fancy way of saying that the output of generative AI is non-deterministic.
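A minimal sketch of why stochastic decoding makes output non-repeatable, and why fixing the random state restores repeatability. The toy next-token distribution is assumed purely for illustration:

```python
import random

# Assumed toy next-token distribution (illustrative probabilities only).
next_token = {"sunny": 0.5, "cloudy": 0.3, "rainy": 0.2}

def sample(rng):
    # Stochastic decoding: draw a token in proportion to its probability.
    return rng.choices(list(next_token), weights=list(next_token.values()))[0]

# Unseeded sampling: repeated runs need not agree (non-deterministic).
runs = {sample(random.Random()) for _ in range(50)}

# Seeded sampling: a fixed random state makes the output repeatable.
fixed = [sample(random.Random(42)) for _ in range(3)]
```

Production systems expose the same trade-off through decoding parameters such as temperature: more randomness means more variety, and less repeatability.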
4. Non-deterministic generation is in fact the foundation of generative AI's core value proposition. The sweet spot lies in use cases that call for creativity; if creativity is not required, or not desired, the scenario is likely a poor fit for generative AI. Use this as a touchstone.
5. Use cases that demand minimal creativity yield very high accuracy; using generative AI in software development to draft code for developers to review is a good example. Demanding creativity at scale forces a generative model to fill very large gaps; that is why you tend to see incorrect citations when you ask it to write a research paper.
6. Generally speaking, the right metaphor for generative AI is the Oracle of Delphi. Oracles are ambiguous; likewise, the outputs of generative AI are not necessarily verifiable. Ask generative AI questions; do not delegate transactional operations to it. In fact, this metaphor extends beyond generative AI to all artificial intelligence.
7. Paradoxically, generative AI models can play a very important role in science and engineering, even though those fields are not typically associated with artistic creativity. The key is to pair the generative model with one or more external validators that filter its output, then feed the validated outputs back as new prompt input for subsequent cycles of creativity, until the combined system produces the desired result.
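The generate-validate loop described above can be sketched as follows. The arithmetic "model" and the `validator` function are hypothetical stand-ins for a real generative model and a real external checker (a compiler, a theorem prover, a simulator):

```python
import random

def generate_candidate(prompt, rng):
    # Stand-in for a generative model: propose a random arithmetic
    # expression. A real model would condition on the prompt.
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    op = rng.choice(["+", "-", "*"])
    return f"{a} {op} {b}"

def validator(expr, target):
    # External validator: check the candidate against ground truth.
    return eval(expr) == target

def generate_until_valid(target, seed=0, max_tries=1000):
    rng = random.Random(seed)
    prompt = f"write an expression equal to {target}"
    for _ in range(max_tries):
        candidate = generate_candidate(prompt, rng)
        if validator(candidate, target):
            return candidate
        # Feed the rejection back as context for the next creative cycle.
        prompt = f"{prompt}; not {candidate}"
    return None
```

The creativity stays inside the model; the correctness guarantee comes entirely from the validator, which is what makes the combined system usable in domains where hallucination is unacceptable.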
8. The widespread use of generative AI in the workplace will create the great divide of the modern era: on one side, those who use generative AI to exponentially amplify their creativity and output; on the other, those who surrender their thinking to it and are gradually marginalized and, inevitably, furloughed.
9. Most of the so-called public models are tainted. Any model trained on the public internet has been trained on content from the fringes of the web, including the dark web. This has serious implications: first, the model may have been trained on illegal content; second, it may have been seeded with Trojan-horse content.
10. The guardrail concept of generative AI is fatally flawed. As the previous point suggests, once a model is tainted, there are almost always creative prompts that push it around the so-called guardrails. We need a better method; a safer method; a method that earns public trust in generative AI.
As we witness the uses and abuses of generative AI, we must look inward and remind ourselves that AI is a tool.