This post is the first of a 3-post series (there may be more eventually, but for the time being I have only planned 3!) on the changes, positive and negative, that ChatGPT will bring to our society in general and to the world of work in particular. I am convinced that, since officially releasing GPT-4, the 4th version of its Large Language Model (LLM), on 14th March, OpenAI has set in motion a systemic revolution. Two days later, Microsoft introduced Microsoft 365 Copilot, which made the progress even more tangible – I suggest you watch the launch event video to form your own opinion; personally, I was sold.
For almost half a century, the Internet and the data it carries have developed through several phases, each enabled by progress in infrastructure technology (hardware and software). Since the beginning of this century, huge improvements in computing power and data transmission speed (including wireless) have led to the creation of an unprecedented amount of data – more than 300 million terabytes every day, according to some estimates. Internet users’ #1 problem has shifted over time: first generating data, then transmitting it, and now sorting it and assessing its quality. In that respect, ChatGPT, if properly used (an important caveat), can speed up that sorting in a meaningful way by summarizing the main viewpoints in response to a user prompt, whether text or image. Microsoft Copilot uses this capability to produce a range of topic-specific outputs (summaries, presentations, etc.) by browsing through your documents and emails; you can then tweak these outputs at your convenience, saving time in the process.
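For the technically curious, here is a minimal sketch of what such a summarization prompt looks like when sent through OpenAI’s Python client. The model name, instructions and placeholder text are purely illustrative assumptions on my part, not a recommendation:

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

article_text = "…paste the text you want condensed here…"  # placeholder

# Ask the model to pull out the main viewpoints, as described above.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": "Summarize the user's text in three bullet points, "
                       "one per main viewpoint.",
        },
        {"role": "user", "content": article_text},
    ],
)

print(response.choices[0].message.content)
```

The point is simply that a single natural-language instruction replaces what used to be a tedious manual reading exercise – which is exactly the time-saving Copilot packages up behind a friendlier interface.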
In practice, though, the tool still needs to be used with caution. Key warnings issued so far include:
- The current tool’s inability to cite the sources behind its assertions, making it hard for users to assess whether a claim is reliable. This problem also has copyright implications: how can the rights holders be compensated if they cannot be identified? And, conversely, to what extent should ChatGPT’s own creations be financially rewarded?
- The lack of source information is all the more troubling because this kind of ‘generative’ model is prone to “hallucinations”, i.e. generating information ex nihilo. This is why experts on LLMs generally consider these models to be above all aids to creation, not independent contributors. In his book Impromptu (‘co-authored’ with GPT-4 and freely available in PDF format), Reid Hoffman, co-founder of LinkedIn, describes GPT-4 as ‘AHA’ – ‘Amplifying Human Abilities’ – not replacing them.
More worryingly, ChatGPT and its competitors have drastically lowered the cost of producing standardized, decent-quality content (I truly believe ChatGPT is still far from winning the Pulitzer Prize). I see two main sources of risk here. First, overreliance on LLMs can harm creativity and/or freedom of judgment by systematically bringing forward the same type of information. Second, it is becoming easier than ever before to create and spread fake news and content on an industrial scale. A malicious user could ask ChatGPT (or one of its competitors) to generate, for example, multiple versions of a fake news report in dozens of languages almost instantaneously – and possibly to accompany it with equally fake illustrations, making it all the more convincing, with diplomatic, political, social and economic consequences that still seem to me hardly imaginable today.
We will have to get used to constantly checking whether the information we receive is true, probably with the help of trusted entities, public or private – what form these entities might take is well beyond the scope of this article.
In the next post, I will come back to more pragmatic considerations, discussing the implications for the demand for (human) skills and the job market equilibrium. Should we, like Goldman Sachs, worry that AI could eventually ‘replace’ 300 million jobs?