AI and the Future of Tech: Unraveling the Mysteries of GPT-3


Let's be real: most of us can barely get through steeping our morning tea before hearing some mention of ChatGPT (I'm a poet and I didn't know it). It's the digital whisper in the wind, with news outlets practically hailing it as the harbinger of an educational Armageddon, the dawn of a brave new world where humans step down from their grand podium of superiority. That conservative relative we all have is telling you the robo-apocalypse is nigh, all while browsing "turtleneck sweaters for guinea pigs" online.

So, what is ChatGPT? Well, in simple terms, it's a computer program that can understand and generate human-like text. But if it were as simple as that, would I have given up my extensive social opportunities to write this article? What am I saying – I live in Tasmania, so most of my social interaction is with the goats down the road. I think we can all agree that the answer is probably yes, but to justify my sacrifice (whatever the scale), let's get into the nitty-gritty, the nuts and bolts, the 'how does this even work?' And for that, we need to dive into the concept of "large language models."

Large language models like GPT-3 (Generative Pre-trained Transformer 3) and ChatGPT are powerful artificial intelligence systems that leverage deep learning techniques to process and generate human-like text. These models are trained on vast amounts of text data to develop a comprehensive understanding of language patterns, grammar, and contextual relationships.

[Image: AI consuming all knowledge, generated by Bing Creator]

The training process for GPT-3 involves pre-training and fine-tuning, much like a university student diligently studying for their exams (and absolutely not doing anything else but furthering their academic capacity during their time at university). The model learns to predict the next word in a sentence by analyzing massive amounts of publicly available text from the internet. It captures the statistical patterns and structures of language, allowing it to acquire a broad knowledge base.
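To make the "predict the next word" idea concrete, here's a toy sketch of my own (nothing like GPT-3's actual training code): it counts which word tends to follow which in a tiny corpus, then predicts the most frequent follower. GPT-3 does something vastly more sophisticated with a neural network, but the spirit – learn statistical patterns from text, then predict the next token – is the same.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "massive amounts of text from the internet".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a crude stand-in for learned patterns).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Predict the most frequently observed word after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("sat"))  # the only word ever seen after "sat" is "on"
```

Swap in more text and the predictions get less silly – which is, in caricature, the whole bet behind training on the entire internet.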

Once training is complete, the model goes through a fine-tuning phase. This phase involves training the model on specific tasks or domains using labeled datasets. For example, GPT-3 can be fine-tuned to generate code snippets, answer questions, write essays, fix code, or engage in conversation (depending on how tragic your social life is – I will not comment on this in my case).
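As a rough mental model of fine-tuning – and this is a toy sketch with made-up numbers, not GPT-3's real procedure – imagine keeping the big pretrained "body" frozen and training only a small task-specific head on a labeled dataset:

```python
import numpy as np

# Toy fine-tuning sketch: the random `features` stand in for embeddings
# produced by a frozen pretrained model; we train only a small task head
# on a labeled dataset (here, an invented binary labeling task).
rng = np.random.default_rng(1)
features = rng.normal(size=(200, 16))        # "pretrained" representations
labels = (features[:, 0] > 0).astype(float)  # toy labels for the new task

w = np.zeros(16)   # the task head: the only parameters we update
lr = 0.5
for _ in range(300):
    preds = 1 / (1 + np.exp(-features @ w))                # sigmoid head
    w -= lr * features.T @ (preds - labels) / len(labels)  # gradient step

accuracy = ((1 / (1 + np.exp(-features @ w)) > 0.5) == labels).mean()
print(f"training accuracy after fine-tuning: {accuracy:.2f}")
```

In practice GPT-3 is adapted end-to-end (or simply prompted), but the principle – reuse pretrained knowledge, then nudge a little on labeled examples – is the same.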

The underlying architecture of GPT-3 and ChatGPT is based on the Transformer, a deep learning architecture that excels at capturing long-range dependencies and contextual relationships in text. Transformers employ self-attention mechanisms, allowing the model to weigh the importance of different words and phrases based on their relevance to the context. This attention mechanism enables the model to generate coherent and contextually appropriate responses based on user input.
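Self-attention sounds abstract, but the core computation is surprisingly compact. Here's a minimal single-head version in NumPy with toy random weights (real models use many learned heads stacked across many layers):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 4, 8                  # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(n_tokens, d))  # toy token embeddings
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)                 # how relevant is each token to each other?
scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1

output = weights @ V  # each token becomes a weighted blend of all tokens
```

Each row of `weights` is exactly that "importance of different words based on their relevance to the context" – a probability distribution saying how much each token should attend to every other.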

The Power of Large Language Models

Language models, like GPT-3, are a type of machine learning model designed to generate human-like text. As mentioned, they are trained on a large corpus of text data and then generate text based on that training. The "large" in "large language models" refers to the sheer size of these models, both in terms of the amount of data they are trained on and the number of parameters they have.

GPT-3, for example, has been trained on hundreds of gigabytes of text and has 175 billion parameters. Yes, that’s a billion with a ‘B’! If you think about each parameter as a sort of ‘knowledge nugget,’ then GPT-3 has a lot of knowledge nuggets, enough to give a nutritionist a nervous twitch if these nuggets were made of chicken.

GPT-3 uses a type of model called a transformer, which is particularly good at understanding the context of a piece of text. This is what allows it to generate coherent and contextually relevant responses to prompts. It’s like having a conversation with a particularly well-read person who has an encyclopedic knowledge of a wide range of topics. That said, it’s important to understand that while GPT-3 is a spectacular beast in the world of AI, it isn’t without its quirks. Imagine it as a quirky genius, often impressively insightful but occasionally offering a head-scratching response that makes you wonder whether it missed its morning cup of tea.

While GPT-3 has been trained on a vast amount of information, it doesn’t truly “understand” in the way humans do. It’s more like a brilliant parrot that can mimic human speech patterns and produce text that sounds intelligent, without necessarily grasping the underlying meaning. It’s like a gourmet chef who can replicate any recipe flawlessly but can’t taste the food it prepares. So while GPT-3 can recite a Shakespearean sonnet or explain the principles of quantum physics, it doesn’t have an actual comprehension of the content.

One of the reasons GPT-3 can perform so well is due to its ability to make use of patterns in the data it was trained on. It’s able to spot the tiniest of details and correlations that even Sherlock Holmes would miss, and this capability is courtesy of the underlying architecture called the transformer, which, as we mentioned earlier, helps the model understand context. It’s the secret sauce that allows GPT-3 to dish out impressively coherent and contextually relevant responses.

Even though GPT-3’s 175 billion parameters—those ‘knowledge nuggets’ we mentioned—are a force to be reckoned with, the model isn’t infallible. Sometimes, it can generate text that is factually incorrect or nonsensical, a little like a sleep-talking scholar. This is because it’s basing its responses purely on patterns it has seen in the data it was trained on, not on a true understanding of the world.

In the grand scheme of things, GPT-3 is like an extraordinarily well-equipped toolbox, loaded with every tool you could imagine. But remember, no matter how sophisticated the toolbox, it’s the craftsman who knows how and when to use the tools that makes a masterpiece. That’s where we humans come into play, guiding and supervising the AI to ensure it is used responsibly and effectively.

What Does This Mean for the Future?

I know what you're thinking: "That's all rather grand, but what does it mean for me?" And that's a great question. The arrival of advanced AI models like GPT-3 is likely to have far-reaching implications, especially when it comes to jobs and the future of work.

Now, don’t panic just yet. AI is not here to take over the world or steal all of our jobs (yet). What it is likely to do, however, is automate certain tasks that are currently done by humans, and I think this is a good thing for humanity. This could mean anything from customer service chatbots that can handle more complex inquiries, to AI assistants that can draft emails or write code.

In the tech world, we've already started to see this. For instance, GitHub, the popular code hosting platform, introduced a feature called Copilot that uses a descendant of GPT-3 to suggest code completions to developers (and I ABSOLUTELY love this – why write boilerplate expressions that can be automated by a robot?). The tool essentially serves as a pair programmer, helping to speed up the coding process and potentially reducing errors.

But it's not just the tech world that will be affected. From journalism to legal work, to teaching and beyond, AI like GPT-3 has the potential to change the way we work and live. Any task that humans currently do, but that a machine could do better (or would be better done by a machine), is where this kind of AI can assist.

While the idea of machines doing tasks traditionally performed by humans might sound a bit dystopian, it’s not all doom and gloom. Think about it this way: it’s like having a tireless assistant that doesn’t need breaks or vacations and can work around the clock. This has the potential to drastically increase productivity and efficiency in many fields.

Take, for example, the field of journalism. Imagine an AI tool that can pull in the latest news, write a draft article, and even suggest catchy headlines, all in the blink of an eye. Or consider the legal field, where an AI could sift through mountains of legal documents in a fraction of the time it would take a human, highlighting relevant information and even suggesting potential legal strategies. It's like having a Miss Marple-level detective on your team, minus all the murders that happen (rather inexplicably, if you ask me) in her village.

And that’s where the future really gets exciting. It’s not about AI taking over, but about humans and AI working together, each playing to their strengths. AI can handle the tedious, time-consuming tasks, freeing up humans to focus on the creative, complex, and emotionally nuanced aspects of their work. So, what does this mean for the future?

It means we’re likely to see more and more jobs involving some degree of interaction with AI. It means we’ll need to adapt, to learn new skills, and to find ways to work alongside these incredibly powerful tools. But most importantly, it means we have the opportunity to shape this future, to guide the development and use of AI in a way that benefits us all. Like a master craftsman with a sophisticated toolbox, it’s up to us to decide how best to use these tools, and what kind of world we want to build with them.

The Verdict

So, should we be excited or terrified? Well, like with most things in life, it’s not that simple. On the one hand, AI like GPT-3 has the potential to automate tasks, improve efficiency, and open up new possibilities that we can’t even imagine yet. On the other hand, these changes will also bring challenges.

[Image: AI devouring humanity, generated by Bing Creator]

According to a report from Goldman Sachs, as many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence, which includes platforms like ChatGPT. Administrative workers and lawyers are expected to be the most affected, with up to a quarter of their work potentially being automated entirely. This could cause significant disruption in the labor market.

But keep your chin up and the kettle boiled. Historically, technological innovation that initially displaces workers has also led to employment growth in the long run. Moreover, widespread adoption of AI could ultimately increase labor productivity and boost global GDP by 7% over a 10-year period.

Most jobs and industries are only partially exposed to automation (currently), and are thus more likely to be complemented rather than substituted by AI. The researchers estimate that, for the US workers expected to be affected, 25% to 50% of their workload could be replaced, with the rest of their capacity directed towards other productive activities. This balance between automation and human work raises the possibility of a labor productivity boom similar to those that followed the emergence of earlier general-purpose technologies like the electric motor and the personal computer.

To sum it up, AI and large language models like GPT-3 are set to change the world in profound ways. Yes, there will be challenges, but there will also be opportunities. As tech enthusiasts, it’s our job to stay informed, to adapt, and to make the most of these changes. After all, as the old saying goes, the only constant in life is change. And in the world of tech, that change is often accompanied by a hefty dose of excitement. So, let’s embrace it, and see where this incredible journey takes us.

				
if (youHaveFeedback === true) {
  return 'Message Me Below!';
}
neobadger

I'm a Technology Consultant who partners with visionary people who want to solve human problems using data and technology (and having fun doing it)!

Want to dig a little deeper? Send me a message!