Sat. Oct 5th, 2024

In recent days, I have been trying to take stock of my 2 months (which seemed much longer!) of using the GPT-4 engine. From this reflection, I have drawn 10 ‘beliefs’, which I classify into two categories: short-term beliefs, which primarily concern the circumstances under which ChatGPT is most effective, and long-term beliefs, which concern the future role of human intelligence in its relationship with the machine.

In the short term: How to make the most of ChatGPT?

Belief #1: ChatGPT Primarily Excels in English

Having used ChatGPT in both English and French, I have occasionally had the impression that its French responses were partially constructed by translating back and forth between English and French. In creative writing, the results resembled poorly dubbed infomercials, though without the infamous lip-sync blunders.

Belief #2: ChatGPT Necessitates Precise and Detailed Instructions

When we engage in dialogue with other people, shared context and experience simplify the exchange. ChatGPT, by contrast, begins each interaction from a clean slate, devoid of any preconceptions or prior information. Providing it with comprehensive, explicit guidelines, particularly in the initial prompt, drastically improves the accuracy of the output. Initially, I found this requirement time-consuming, but I soon realized that the investment pays off in getting the desired outcome.
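
To make this concrete, here is a minimal sketch of what such explicit guidelines can look like when calling the model programmatically, assuming the OpenAI Python client and an API key in the environment; the model name, topic, and prompts are purely illustrative, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague prompt leaves the model guessing about audience, scope, and format.
vague_prompt = "Write something about platform economies."

# A detailed prompt states the role, audience, constraints, and expected output.
detailed_prompt = (
    "You are an economics blogger writing for non-specialist readers. "
    "Write a 300-word introduction to platform economies. "
    "Cover network effects and the chicken-and-egg problem, "
    "use one concrete example (for instance a ride-hailing app), "
    "avoid jargon, and end with a one-sentence takeaway."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": detailed_prompt}],
)
print(response.choices[0].message.content)
```

In my experience, the extra minute spent writing the second kind of prompt saves several rounds of corrections later.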

Belief #3: ChatGPT Possesses Rapid Reading and Synthesis Capability

Presently, one of the most compelling capabilities of ChatGPT is its capacity to swiftly process and synthesize large amounts of text with remarkable accuracy, exceeding conventional tools such as Microsoft Word’s automatic summarizer. A colleague was particularly impressed by ChatGPT’s ability to create ‘meta-summaries’ (summaries of summaries) to work around text length limitations. With the right calibration, ChatGPT can assist professionals such as lawyers or investors in dissecting complex contracts and drawing attention to key provisions. Fact-checking, however, remains a necessity.
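
The ‘meta-summary’ idea can be sketched roughly as follows, again assuming the OpenAI Python client; the character-based chunking, the chunk size, and the prompts are arbitrary simplifications for illustration, not the exact method my colleague used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize(text: str) -> str:
    """Ask the model for a short summary of one piece of text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Summarize the following text in 5 sentences:\n\n" + text,
        }],
    )
    return response.choices[0].message.content

def meta_summarize(document: str, chunk_size: int = 8000) -> str:
    """Summarize a document too long for a single prompt:
    split it into chunks, summarize each chunk, then summarize the summaries."""
    # Naive character-based split, purely for illustration.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    partial_summaries = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(partial_summaries))
```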

Belief #4: ChatGPT Excels in Literary Queries over Scientific Ones

Scientific inquiries necessitate precise, correct responses, leaving minimal room for error. In contrast, literary questions, unless they are fact-dependent, leave more room for interpretation and discussion. This distinction makes it relatively easier to spot inaccuracies or approximations in ChatGPT’s responses to scientific queries. Nonetheless, as previously indicated, ChatGPT excels at maintaining a credible tone even when making the most preposterous assertions. Consequently, I recommend setting aside ample time for rigorous verification.

Belief #5: Follow-Up Questions Offer Significant Value

Beyond its training data, ChatGPT has no context about your specific situation when responding to an initial prompt. Rather than overloading it with exhaustive background details, I have found it more efficient to provide the essential information, let the AI draft a first version, and then iterate on it. The follow-up questions should remain concise and focused on what matters most. ChatGPT quickly delivers an output that is roughly 80% of the way there, although closing the remaining 20% can prove challenging.
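
For readers working through the API rather than the chat interface, this iterative loop looks roughly like the sketch below, assuming the OpenAI Python client; the task and the follow-up instruction are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Start with only the essential context and ask for a first draft.
messages = [
    {
        "role": "user",
        "content": "Here are the key facts about project X: ... "
                   "Draft a 150-word status update for senior management.",
    },
]
first = client.chat.completions.create(model="gpt-4", messages=messages)
draft = first.choices[0].message.content

# Keep the draft in the conversation, then send a short, targeted follow-up.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": "Good. Make the tone more neutral and add a bullet list of next steps.",
})
revised = client.chat.completions.create(model="gpt-4", messages=messages)
print(revised.choices[0].message.content)
```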

Belief #6: ChatGPT Exhibits Some Form of ‘Creativity’

ChatGPT, like all large language models, was trained on enormous volumes of text to learn the relationships between words. It uses this learned knowledge to generate text that continues coherently and naturally from a given input. This allows ChatGPT to produce genuinely new sentences by stringing words together in novel ways. The surprising part is that ChatGPT can also craft entirely original narratives in which not just the sentences but the entire plotline is spontaneously conceived.

This randomness becomes starkly apparent when ChatGPT is asked the same creative question multiple times. For sufficiently open-ended queries, the resulting answers can vary widely, offering different perspectives that the user can then combine, taking an idea from one attempt and integrating it with another. Although these outputs are not flawless, they save substantial time in producing an initial draft or generating ideas. Very concretely, I was able to create a genuinely new tennis-based card game for my son in less than 30 minutes, combining ideas taken from three separate and well-developed streams of thought.
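
This variation can also be provoked deliberately through the API by requesting several completions of the same prompt, as in this minimal sketch (OpenAI Python client assumed; the prompt merely echoes my card-game experiment).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = ("Invent a card game based on tennis for a ten-year-old. "
          "Describe the deck, the goal of the game, and the basic rules.")

# Several completions at a non-zero temperature yield distinct drafts
# that can then be combined by hand.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    n=3,              # three independent drafts
    temperature=1.0,  # keep sampling randomness on
)
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Draft {i} ---")
    print(choice.message.content)
```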

Belief #7: Vigilant Checking of ChatGPT Outputs is Essential to Counter Inaccuracies, Inconsistencies, and Fabrications

In one experiment, I asked ChatGPT to rephrase my latest blog post about platform economies. The restructured text flowed well and incorporated pertinent concepts, which could misleadingly suggest intellectual soundness to the reader. A closer examination, however, revealed several flawed causal connections, potentially mirroring the uneven quality of information available on the internet.

Moreover, as the dialogue extends, ChatGPT increasingly tends to fabricate inaccurate information to compensate for its knowledge gaps and/or to disregard previous prompts. This gradually diminishes the value of each subsequent iteration, until it becomes detrimental. Surprisingly, I noted how readily ChatGPT concedes its errors, which suggests such errors are not so rare.

In the long term: How can humans coexist with the machine?

Belief #8: Formulating Accurate Queries and Evaluating Results Will Become Essential Competencies

This one is actually my strongest belief.

ChatGPT is no exception to the universal computing maxim ‘Garbage In, Garbage Out’: the quality of its outputs is largely determined by the quality of the prompts it receives. Until recently, human-machine interaction was limited to strictly standardized programming languages, and the interaction was binary: either the correct instructions were entered, or they were not. By using natural language, ChatGPT has lowered the barrier to entry, but it has also opened the door to poorly articulated prompts that lead to erroneous or irrelevant outcomes. The task has become both more approachable and more complex: a real skill to be acquired, and one where human intuition still holds value.

Consequently, assessing the quality and accuracy of AI responses, critiquing and enriching them with personal insights, and formulating appropriate follow-up questions will increasingly be desirable skills in this new landscape.

Belief #9: The Necessity for Subject Matter Experts (SMEs) Will Persist, But ChatGPT Will Enhance Productivity and Potentially Diminish Demand

I predict that ChatGPT, by accelerating the initial stages of producing work, will enable SMEs to focus more on reviewing and editing content rather than crafting it entirely themselves. In effect, experts will primarily ensure the accuracy of the outputs and will likely be able to cover a broader scope within a similar timeframe.

Since ChatGPT derives its knowledge base from publicly accessible internet content, it is in everyone’s best interest to publish only accurate information vetted by SMEs. This helps prevent a gradual drift away from the truth over time (i.e., ChatGPT absorbs inaccurate data, recycles and potentially amplifies it, thereby reinforcing an erroneous learning cycle).

Therefore, while I believe SMEs will continue to be crucial, the demand for specialist knowledge may gradually decline.

Belief #10: My Enthusiasm Mirrors My Apprehension – What Will Future Iterations of ChatGPT Entail?

Having experimented with both ChatGPT 3.5 and ChatGPT 4.0, I am astounded by the rapid pace of technological advancement. The occasionally awkward sentences and numerous loopholes (including ethical ones) of ChatGPT 3.5 were remarkably improved upon in ChatGPT 4.0, released only a few months later. Today, it is often difficult to distinguish human-generated from ChatGPT-generated content.

With the release of ChatGPT 4.5 anticipated between September and October, and rumors of ChatGPT 5.0 being ready by year’s end, I find it difficult to extrapolate the potential improvements without feeling apprehensive. What enhancements could this technology offer? How will its developers safeguard us against destructive uses such as mass disinformation or harmful research? What if it gained the ability to process real-time data, a feature currently disabled (its training data stops in late 2021)?

In conclusion, the situation seems to me, for the moment, neither black nor white. It reminds me of a driver accustomed to driving Twingos who suddenly receives a Ferrari: wonder and apprehension give way to a period in which the human tames the machine. Each of us, at our own scale, is learning too.
