Microsoft is reportedly planning to add ChatGPT to its MS Office suite, which holds a dominant market share in business computing. More than a billion people use MS Office to write documents, crunch numbers, create presentations, and send email. If ChatGPT becomes part of that suite, users will also be able to generate text automatically. ChatGPT is still in beta, yet it is already being touted, somewhat hyperbolically, as a “Google killer”. Asked a question in natural language, it responds with a natural-language essay, or précis, of what it considers the relevant information culled from appropriate sources. Such a curated essay is easier for most users to digest than the classic search-engine offering of links ranked by keywords. At least for now, however, ChatGPT lacks the human judgement needed to distinguish fact from fake news or fiction.
ChatGPT can also write poems or short stories in the style of designated poets and writers. More impressively, it can generate code, or locate open-source code, from natural-language instructions, reducing the need for the average computer user to know how to program. (MS owns GitHub, a vast repository of code, much of it free to use.) Natural-language processing (NLP), the field on which ChatGPT is built, is at the cutting edge of AI research. Trained on vast amounts of human-generated text and speech, an approach known as a large language model, ChatGPT has learned to write sentences that read as if a human had composed them. There are countless ways in which NLP could benefit humanity. A driverless vehicle that understands ordinary speech, for example, would provide more responsive rides. NLP could support health-care services by asking patients preliminary questions and offering initial diagnoses based on their answers. And, as mentioned above, NLP could reduce the need for coders and, given reliable translation skills, help speakers of one language produce high-quality content in another.
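The core idea behind training on text, predicting what comes next from what came before, can be caricatured with a toy next-word predictor. The sketch below is purely illustrative: it uses simple bigram counts over a made-up corpus, whereas ChatGPT uses an enormous neural network trained on billions of documents.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; a real large language model trains on
# billions of documents, not three sentences.
corpus = (
    "the model predicts the next word . "
    "the model learns from text . "
    "the next word follows the previous word ."
)

# Record which words follow which (bigram statistics).
follows = defaultdict(list)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Even this crude version produces locally coherent word sequences; scaling the same "learn from text, predict the next token" principle up by many orders of magnitude is what gives systems like ChatGPT their fluency.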
Unfortunately, there is a dark side. These systems absorb whatever biases exist in their training material. They can also be misused: a recent experiment used AI to interact with mental-health patients without disclosing that fact, and the same fluency that makes them helpful could make them potent phishing tools. Such dangers will require careful navigation as ChatGPT and its competitors and successors gain in power and refinement.