OpenAI’s development of large language models (LLMs) and interactive chatbots has given rise to various AI solutions. The most famous of these is OpenAI’s own ChatGPT, but while that is currently powered by GPT-4, it originally launched on GPT-3.5, a direct descendant of GPT-3.

But while GPT-3 was less advanced than its successors, it was still impressive.

When the text-generating algorithm GPT-2 was created in 2019, it was labeled one of the most “dangerous” A.I. algorithms in history. In fact, some argued that it was so dangerous that it should never be released to the public (spoiler: It was) lest it usher in the “robot apocalypse.” That, of course, never happened. GPT-2 was eventually released to the public, and after it didn’t destroy the world, its creators moved on to the next thing. But how do you follow up the most dangerous algorithm ever created?

You make a sequel. One nigh-indestructible machine sent back from the future in The Terminator? Give audiences two of them to grapple with in Terminator 2: Judgment Day.

The same is true for A.I. — in this case, GPT-3, a recently released natural language processing neural network created by OpenAI, the artificial intelligence research lab that was once backed by (but is no longer affiliated with) SpaceX and Tesla CEO Elon Musk.

GPT-3 is the latest in a series of text-generating neural networks. The name GPT stands for Generative Pretrained Transformer, referencing a 2017 Google innovation called the Transformer, an architecture that can work out the likelihood that a particular word will appear alongside surrounding words. Fed a few sentences, such as the beginning of a news story, the pre-trained GPT language model can generate convincing continuations, even fabricating plausible-sounding quotes.
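To make that next-word idea concrete, here is a minimal sketch of querying a small, publicly available GPT-2 model through the open-source Hugging Face transformers library (a tool from a startup mentioned later in this article; note that OpenAI serves GPT-3 through its own API, not this code path, and the prompt here is an arbitrary example):

```python
# Minimal sketch: next-word prediction and text continuation with GPT-2,
# via the open-source Hugging Face `transformers` library. The prompt is
# an arbitrary example, not anything from OpenAI.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

# The model's raw output is a probability distribution over the next token.
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")

# Sampling from that distribution token after token yields a continuation.
output = model.generate(**inputs, max_length=60, do_sample=True, top_k=50,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```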

This is why some worried that it could prove dangerous: like deepfakes, machine-generated text could help spread fake news online. With GPT-3, that capability is bigger and smarter than ever.

GPT-3 is, as a boxing-style “tale of the tape” comparison would make clear, a real heavyweight bruiser of a contender. OpenAI’s original 2018 GPT had 110 million parameters, referring to the weights of the connections that enable a neural network to learn. 2019’s GPT-2, which caused much of the previous uproar about its potential malicious applications, possessed 1.5 billion parameters. In February 2020, Microsoft introduced Turing-NLG, then the world’s biggest pre-trained language model of this kind, boasting 17 billion parameters. 2020’s monstrous GPT-3, by comparison, has an astonishing 175 billion parameters. It reportedly cost around $12 million to train.

“The power of these models is that in order to successfully predict the next word they end up learning really powerful world models that can be used for all kinds of interesting things,” Nick Walton, chief technology officer of Latitude, the studio behind A.I. Dungeon, an A.I.-generated text adventure game powered by GPT-2, told Digital Trends. “You can also fine-tune the base models to shape the generation in a specific direction while still maintaining the knowledge the model learned in pre-training.”
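As a rough illustration of the fine-tuning Walton describes, the sketch below adapts a pretrained GPT-2 to a custom text corpus with Hugging Face’s transformers library. The file adventures.txt is a hypothetical placeholder; Latitude’s actual training pipeline is not public:

```python
# Hedged sketch: fine-tuning a pre-trained GPT-2 on a custom corpus so its
# generations lean toward a new domain while keeping its general knowledge.
# "adventures.txt" is a hypothetical placeholder file of training text.
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2Tokenizer, TextDataset, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chunk the raw text file into fixed-length training examples.
dataset = TextDataset(tokenizer=tokenizer, file_path="adventures.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()  # updates the pre-trained weights on the new text
trainer.save_model("gpt2-finetuned")
```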


Gwern Branwen, a commentator and researcher who writes about psychology, statistics, and technology, told Digital Trends that the kind of pre-trained language model GPT represents has become “an increasingly critical part of any machine learning task touching on text. In the same way that [the standard suggestion for] many image-related tasks has become ‘use a [convolutional neural network],’ many language-related tasks have become ‘use a fine-tuned [language model].’”

OpenAI — which declined to comment for this article — is not the only company doing some impressive work with natural language processing. As mentioned, Microsoft has stepped up to the plate with some dazzling work of its own. Facebook, meanwhile, is heavily investing in the technology and has created breakthroughs like BlenderBot, the largest ever open-sourced, open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators. As anyone who has used a computer in the past few years will know, machines are getting better at understanding us than ever — and natural language processing is the reason why.

But OpenAI’s GPT-3 still stands alone in its sheer record-breaking scale. “GPT-3 is generating buzz primarily because of its size,” Joe Davison, a research engineer at Hugging Face, a startup working on the advancement of natural language processing by developing open-source tools and carrying out fundamental research, told Digital Trends.

The big question is what all of this will be used for. GPT-2 found its way into a myriad of uses, powering text-generating systems from creative-writing tools to games like the aforementioned A.I. Dungeon.

Davison expressed some caution that GPT-3 could be limited by its size. “The team at OpenAI have unquestionably pushed the frontier of how large these models can be and showed that growing them reduces our dependence on task-specific data down the line,” he said. “However, the computational resources needed to actually use GPT-3 in the real world make it extremely impractical. So while the work is certainly interesting and insightful, I wouldn’t call it a major step forward for the field.”

Others disagree, though. “The [artificial intelligence] community has long observed that combining ever-larger models with more and more data yields almost predictable improvements in the power of these models, very much like Moore’s Law of scaling compute power,” Yannic Kilcher, an A.I. researcher who runs a YouTube channel, told Digital Trends. “Yet, also like Moore’s Law, many have speculated that we are at the end of being able to improve language models by simply scaling them up, and in order to get higher performance, we would need to make substantial inventions in terms of new architectures or training methods. GPT-3 shows that this is not true and the ability to push performance simply through scale seems unbroken — and there’s not really an end in sight.”

Branwen suggests that tools like GPT-3 could be a major disruptive force. “One way to think of it is, what jobs involve taking a piece of text, transforming it, and emitting another piece of text?” Branwen said. “Any job which is described by that, such as medical coding, billing, receptionists, customer support, [and more], would be a good target for fine-tuning GPT-3 on, and replacing that person. A great many jobs are more or less ‘copying fields from one spreadsheet or PDF to another spreadsheet or PDF,’ and that sort of office automation, which is too chaotic to easily write a normal program to replace, would be vulnerable to GPT-3 because it can learn all of the exceptions and different conventions and perform as well as the human would.”
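For a sense of how such a text-in, text-out task might look in practice, here is a hedged sketch using OpenAI’s original Completion API as it existed at GPT-3’s launch; the invoice examples and prompt layout are invented purely for illustration:

```python
# Hedged sketch: a few-shot "transform this text" task of the kind Branwen
# describes, via OpenAI's original Completion API. The invoice examples and
# field layout are invented placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Extract the invoice fields as CSV (number, vendor, total).

Text: Invoice #4412 from Acme Corp, total $1,250.00
CSV: 4412,Acme Corp,1250.00

Text: Invoice #7810 from Globex Ltd, total $480.50
CSV:"""

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model at launch
    prompt=prompt,
    max_tokens=30,
    temperature=0,      # deterministic output for structured tasks
    stop="\n",
)
print(response.choices[0].text.strip())  # e.g. "7810,Globex Ltd,480.50"
```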

Ultimately, natural language processing may be just one part of A.I., but it arguably cuts to the core of the artificial intelligence dream in a way that few other disciplines in the field do. The famous Turing Test, one of the seminal debates that kick-started the field, is a natural language processing problem: Can you build an A.I. that can convincingly pass itself off as a person? OpenAI’s latest work certainly advances this goal. What remains to be seen is what applications researchers will find for it.

“I think it is the fact that GPT-2 text could so easily pass for human that it is getting difficult to hand-wave it away as ‘just pattern recognition’ or ‘just memorization,’” Branwen said. “Anyone who was sure that the things that deep learning does is nothing like intelligence has to have had their faith shaken to see how far it has come.”

GPT-3 has since been superseded by GPT-3.5 and GPT-4.
