Is AI going to change the world?

The power and implications of AI

The technologies we’re witnessing are powerful, impressive and rapidly evolving. Everything you’ve seen on the screen, for example, is footage from OpenAI’s new video-generating model, Sora. But let’s peel back the algorithmically patterned wallpaper for a moment and look at the structure behind it. The AI of the 2020s isn’t new. But its consequences are. If you’re watching this, they’ve already affected you.

Public response to AI

So how should we, the public, respond to tools that rely on more data than we can ever fathom? How can they change our relationship with work? And… do we need to panic? I’m Matt Ferrell… welcome to Undecided. This article is brought to you by Brilliant, but more on that later. A lot of people are both excited and scared about the state of AI right now, and rightly so. However, one of my goals with this channel is to give you reasons to remain optimistic. Today I’ll try to put the recent explosion of interest in AI into context.

Defining AI

Before we get started, let me be clear. When I use the word “AI”, I’m referring specifically to generative AI. This includes large language models, or LLMs, like ChatGPT, and image generators like Midjourney. Basically, these programs are designed to perform specific tasks, and to describe the way they work as simply as possible: they identify patterns. If they find patterns in a given input that match the data they’ve been trained on, they use that data as a springboard to create a new output. At least, that’s the idea. It’s important to note that these tools are not examples of Artificial General Intelligence (AGI), or the Marvins and HALs of sci-fi spaceships. They’re much narrower than that. Overconfident as they sometimes sound, even the tech companies chasing AGI acknowledge that it’s still a goal, not a reality.
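To make that “springboard” idea a bit more concrete, here’s a toy sketch in Python. It is emphatically not how modern LLMs work internally (they use deep neural networks, not lookup tables), but it shows the basic move of learning patterns from training data and then reusing them to generate something new.

```python
import random
from collections import defaultdict

# Toy illustration only: real LLMs use neural networks, not lookup tables,
# but the core move of "learn patterns from data, then continue them" is similar.
training_text = "the cat sat on the mat the cat chased the dog"

# Count which word tends to follow which (the "patterns" in the training data).
followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

# Generate new text by repeatedly picking a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    if word not in followers:
        break
    word = random.choice(followers[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat chased the dog" -- new text stitched from old patterns
```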

AI isn’t new

My main goal with this video is to add nuance to larger conversations about AI as a whole. So let me start by reminding you: AI isn’t new. I know this statement may seem obvious to some, and confusing to others, so let me clarify. In fact, I have a few friends who can help me with this.

“Most people don’t realise this, but AI has been around for a long time, helping to solve all sorts of problems in all sorts of industries. Before I started my channel, I spent eight years as a rocket scientist at MIT, and part of my job was to use machine learning algorithms to help space telescopes and long-range radars detect really small and fast-moving objects. We ended up building some neural networks and training them to understand what to look for and what to ignore.”

“Let’s start with my experience before the 2020s. I was a software engineer at Salesforce, and we had this product called Einstein, which brought AI to your data. It was a lot like Netflix’s recommendations. If you’re watching a show, it’ll tell you, ‘Hey, you’ll also like this show.’ But it was largely pattern-based.” For reference, Einstein was launched in 2016.

Historical context of AI

But we can go back much further than that. Researchers have been working on what we now call generative AI for a lot longer than you might think. The original ELIZA was written by computer scientist Joseph Weizenbaum at MIT in the mid-1960s, but let me tell you about when I first met it. Our first family computer was a Commodore 64. Yes… 64KB of RAM with no floppy or hard drive of any kind. My brother, Sean, and I would spend hours in our little upstairs playroom nook, plugging in lines of code from a book of BASIC. ELIZA is one program that’s stayed with me all these years. It would ask you questions and then follow up on your answers in the style of a Rogerian psychologist. This was important to the illusion, because Rogerians encourage therapy patients to do most of the talking. The technical trick behind the scenes was that ELIZA was looking for “keys” in your sentence. In other words, it looked for patterns.

For example:
“What did you do today?”
“I played with a Hot Wheels car.”
“Tell me more about the Hot Wheels car.”

For a little kid in the 1980s, this was mind-blowing, and it felt like you were talking to something alive inside the computer… until you turned it off and lost the whole program. Sound familiar? In any case, ELIZA is just one of many in a long line of precursors to the chatbots we know today. And if you look at collective reactions to these kinds of programs throughout history, you’ll see that the human tendency to humanise AI helps perpetuate misconceptions about its capabilities.
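If you’re curious what that keyword trick looks like in code, here’s a deliberately tiny, hypothetical sketch in Python. The real ELIZA used a much richer set of decomposition and reassembly rules (and my childhood version was in BASIC), but the match-a-key-and-echo-it-back idea is the same.

```python
import re

# A drastically simplified, hypothetical take on ELIZA's trick: scan the input
# for a "key" pattern and echo part of it back inside a canned template.
RULES = [
    (re.compile(r"i played with (.+)", re.IGNORECASE), "Tell me more about {}."),
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"my (\w+)", re.IGNORECASE), "Tell me about your {}."),
]

def eliza_reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # fallback when no key matches

print(eliza_reply("I played with a Hot Wheels car."))  # Tell me more about a Hot Wheels car.
```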


The Turing Test and Eugene Goostman

For another example, we can look at the persona of the chatbot known as “Eugene Goostman”. You’ve probably heard of Turing tests, named after the mathematician Alan Turing. In a seminal 1950 paper, he proposed a theoretical “imitation game” to determine whether a machine could behave indistinguishably from a human. Since then, various groups have organised competitions with panels of judges to assess the “humanity” of chatbots, though it’s important to note that Turing tests have no universally agreed rules, and not everyone finds this form of assessment valuable. Goostman, which played the part of a 13-year-old Ukrainian boy, made headlines in 2014 when the organisers of one such competition claimed it had passed their version of the test by fooling a third of the judges. The persona was part of the trick: a teenager writing in his second language gets a lot of slack for odd grammar and gaps in knowledge.

Comparison of past and present AI

So is this cast of characters so far removed from what we’re dealing with now? Yes and no. Yes, in the sense that, broadly speaking, these bots of yesteryear operated within systems that directly involved human hands, whether through hand-coded rules or mimicking inputs from crowd-sourced conversations. That’s different from today’s popular large language models, which use machine learning. More specifically, it’s the “deep” kind of learning, also known as neural networks. The point of these networks is to loosely mimic the way the brain processes information, allowing AI systems to “learn” with less human intervention. The chatbots that have been making waves in recent years are built on a different foundation, yes. But that foundation is just as old.

Early AI development

It was back in 1943 that the scientists Warren McCulloch and Walter Pitts laid the mathematical foundations for artificial neurons: simple units that take inputs and classify them. You know… the same kind of task you perform every time a website asks you to complete a CAPTCHA to prove your humanity. Then, in 1957, psychologist Frank Rosenblatt built on that work with what he called “the perceptron”, which would become the basis of neural networks. He then married maths to metal by building a “Mark I” version of the machine. Its purpose? To recognise images.
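To give a feel for what Rosenblatt’s idea boils down to, here’s a minimal sketch of a perceptron-style learning rule in Python, trained on a toy task (the logical AND of two inputs) rather than images. The Mark I did its learning in hardware with photocells and motor-driven potentiometers, so treat this purely as an illustration of the weight-update idea.

```python
# A minimal sketch of the perceptron learning rule on a toy task: learning
# the logical AND of two inputs. Each mistake nudges the weights toward the answer.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

for _ in range(20):                         # a handful of passes is enough here
    for x, target in data:
        error = target - predict(x)         # -1, 0, or +1
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in data])        # expected: [0, 0, 0, 1]
```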

News articles about AI

So let’s take a moment to read some news. Here are a few quotes from the introduction to a New York Times article on machine learning: “Computer scientists, taking cues from how the brain works, are developing new kinds of computers that seem to have an uncanny ability to learn on their own. … The new computers are called neural networks because they contain units that function roughly like the brain’s intricate network of neurons. Early experimental systems, some of them eerily human-like, are inspiring predictions of astonishing advances.” Oh, wait, hang on. This article is dated… 1987. Right around the time I was punching the ELIZA code into my Commodore 64.


Personal experience of AI

To give you a more recent insight into how long we’ve been tinkering with machine learning, I can talk about my own career. I used to work on competitive multiplayer games. You could win prizes by beating other players, so there was a huge incentive for people to cheat. To combat this, the development team created bot detection systems, which let us analyse move-history data from previous matches and reveal the subtle differences between how humans and cheat programs play. It was quite effective, but we needed human data to make the comparison. And like the chatterbots of yesteryear, our modern bards and copilots rely on human data to function. Whether it’s a quirky conversationalist in the 1980s or a budding assistant in the 2010s, AI systems interpret vast amounts of information and make their best guesses about what to do with it. Without all the data we produce, they can’t do much. And that’s part of the problem.
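I can’t share the actual detection code, but here’s a hypothetical sketch of the intuition: humans play with noisy, variable timing, while simple cheat programs act with machine-like regularity. The function name and the “jitter” threshold below are made up for illustration; a real system would use far more signals and proper statistical modelling.

```python
import statistics

# Hypothetical sketch: flag players whose time between moves barely varies.
# Real bot detection combines many behavioural signals, not just timing.
def looks_like_a_bot(move_intervals_ms, min_jitter_ms=40):
    """Return True if the gaps between moves are suspiciously uniform."""
    if len(move_intervals_ms) < 10:
        return False  # not enough history to judge
    jitter = statistics.stdev(move_intervals_ms)
    return jitter < min_jitter_ms

human = [310, 540, 420, 980, 350, 610, 470, 720, 390, 830]
bot = [200, 201, 199, 200, 202, 198, 200, 201, 199, 200]

print(looks_like_a_bot(human))  # False
print(looks_like_a_bot(bot))    # True
```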

Brilliant sponsorship

Wrapping your head around the concepts of AI and LLMs can be overwhelming. That’s why I spent some time going through the new ‘How LLMs Work’ course from today’s sponsor, Brilliant. It’s hands-on, with real code examples to illustrate what’s under the hood of today’s cutting-edge LLMs and how they work. You can try out the full How LLMs Work course now with a 30-day free trial, plus the first 200 people will get 20% off Brilliant’s premium annual subscription. Just visit brilliant.org/Undecided.

Ethical dilemmas of AI

You may not realise it, but the constant collection of human data raises a number of ethical dilemmas that we’ve never fully resolved. And the advances that have been made are often far from perfect. For example, most large language models suffer from bias due to the data they’re trained on, even though they’re supposed to be neutral. At their worst, they’re used to do things we’d rather they didn’t. The Goostman bot I mentioned earlier made those headlines in 2014. Around that time, hackers attacked a nuclear power plant in Germany in what became known as the Gundremmingen cyberattack. A member of the team later told the media that they had deliberately staged the attack to test Eugene’s limits. If that’s not enough to send shivers down your spine, there’s no shortage of stories about AI exploits affecting modern systems.

AI in society

I’m not bringing this up to scaremonger. I still believe we can and should use AI to improve people’s lives. But we need to be clear-eyed about its potential misuse, even though we’ve faced these challenges before.

AI in video games

To bring this conversation back to a more positive sphere, I’d like to highlight some great uses of AI in the world of video games, which is admittedly my own personal bias. It’s one of the most accessible ways for people to get a more nuanced understanding of what AI is and how it works. Video games are also great at demonstrating the consequences of how AI systems make decisions. Games like Civilization and XCOM have even incorporated procedural storytelling elements that adapt to player choices. Procedural storytelling itself isn’t a new concept. Rogue, released in 1980, gave the roguelike genre its name and was one of the earliest games to use procedural generation for level design and item placement.
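If you’ve never seen procedural generation in action, here’s a tiny sketch of the Rogue-style idea: start from a seed, apply a few simple rules, and you get a fresh level every run. The room sizes, symbols, and seed below are arbitrary choices for illustration, not anything from a real game.

```python
import random

# Tiny sketch of Rogue-style procedural generation: carve a few random rooms
# into a grid and scatter an item in each. Real roguelikes add corridors,
# monsters and balancing, but the seed-plus-rules idea is the same.
WIDTH, HEIGHT = 40, 12
grid = [["#"] * WIDTH for _ in range(HEIGHT)]

def carve_room(rng):
    w, h = rng.randint(4, 8), rng.randint(3, 5)
    x, y = rng.randint(1, WIDTH - w - 1), rng.randint(1, HEIGHT - h - 1)
    for row in range(y, y + h):
        for col in range(x, x + w):
            grid[row][col] = "."
    # drop one item ("!") somewhere inside the room
    grid[rng.randint(y, y + h - 1)][rng.randint(x, x + w - 1)] = "!"

rng = random.Random(64)   # same seed -> same dungeon, new seed -> new dungeon
for _ in range(4):
    carve_room(rng)

print("\n".join("".join(row) for row in grid))
```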

AI in art

Speaking of creative work, AI-generated art is something else worth mentioning. The tools now available to artists can create some truly stunning pieces, but the controversy surrounding the use of AI to create art is also very real. I’ve seen some incredible artwork generated by AI, yet the ethical concerns about the datasets used to train these models cannot be ignored. Artists deserve to be compensated and credited for their work, and we need to figure out how to strike that balance as AI continues to advance.

Conclusion

Ultimately, AI is not something to be feared, but it’s not something to be taken lightly either. We’ve been living with AI for decades, and while the tools and technologies have evolved, the core principles remain the same. By understanding the history and current landscape, we can better navigate the future of AI and use it responsibly to improve our lives. So let’s not pretend that AI is new, but let’s also not ignore the new implications and opportunities it brings. Thanks for watching, and I’ll see you next time.

