This post is part of Lifehacker's "Living With AI" series: We investigate the current state of AI, walk through how it can be useful (and how it can't), and evaluate where this revolutionary tech is heading next. Read more here.
You wouldn't be blamed for thinking AI really kicked off in the past couple of years. But AI has been a long time in the making, spanning most of the 20th century. It's difficult to pick up a phone or laptop today without seeing some type of AI feature, but that's only because of work going back nearly one hundred years.
AI's conceptual beginnings
Of course, people have been wondering if we could make machines that think for as long as we've had machines. The modern concept came from Alan Turing, a renowned mathematician well known for his work in deciphering Nazi Germany's "unbreakable" code produced by their Enigma machine during World War II. As the New York Times highlights, Turing essentially predicted what the computer could (and would) become, imagining it as "one machine for all possible tasks."
But it was what Turing wrote in "Computing Machinery and Intelligence" that changed things forever: The computer scientist posed the question, "Can machines think?" but also argued this framing was the wrong approach to take. Instead, he proposed a thought experiment called "The Imitation Game." Imagine you have three people: a man (A), a woman (B), and an interrogator, separated into three rooms. The interrogator's goal is to determine which player is the man and which is the woman using only text-based communication. If both players are truthful in their answers, it's not such a difficult task. But if one or both decides to lie, it becomes much more challenging.
But the point of the Imitation Game isn't to test a human's deduction ability. Rather, Turing asks you to imagine a machine taking the place of player A or B. Could the machine effectively trick the interrogator into thinking it was human?
Kick-starting the idea of neural networks
Turing was the most influential spark for the concept of AI, but it was Frank Rosenblatt who actually kick-started the technology's practice, even if he never saw it come to fruition. Rosenblatt created the "Perceptron," a computer modeled after how neurons work in the brain, with the ability to teach itself new skills. The computer has a single-layer neural network, and it works like this: You have the machine make a prediction about something (say, whether a punch card is marked on the left or the right). If the computer is wrong, it adjusts to be more accurate. Over thousands or even millions of attempts, it "learns" the right answers instead of having to predict them.
That design is based on neurons: You have an input, such as a piece of information you want the computer to recognize. The neuron takes the data and, based on its previous knowledge, produces a corresponding output. If that output is wrong, you tell the computer and adjust the neuron's "weight" to nudge the result closer to the desired output. Over time, you find the right weights, and the computer will have successfully "learned."
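To make that loop concrete, here is a minimal sketch of a perceptron-style learning rule in Python. The punch-card stand-in, the learning rate, and the function name are illustrative choices, not anything from Rosenblatt's original hardware; the point is simply that a wrong guess moves the weights and a correct guess leaves them alone.

def train_perceptron(samples, epochs=20, learning_rate=0.1):
    # Each sample is ([x1, x2, ...], label) with label 0 or 1.
    n_inputs = len(samples[0][0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            # Weighted sum of the inputs, plus a bias term.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            # A wrong guess nudges every weight toward the right answer;
            # a correct guess changes nothing.
            error = label - prediction
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy stand-in for the punch-card task: is the mark on the left or the right?
cards = [([1, 0], 0), ([0, 1], 1)]
print(train_perceptron(cards))

After a few passes, the learned weights reliably separate left-marked cards from right-marked ones; that trial-and-error weight adjustment is the whole trick.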
Unfortunately, despite some promising attempts, the Perceptron simply couldn't follow through on Rosenblatt's theories and claims, and interest in both it and the practice of artificial intelligence dried up. As we know today, however, Rosenblatt wasn't wrong: His machine was just too simple. The Perceptron's neural network had only one layer, which limits it to the simplest, linearly separable patterns and isn't enough to enable machine learning on any meaningful level.
Many layers make machine learning work
That's what Geoffrey Hinton discovered in the 1980s: Where Turing posited the idea and Rosenblatt created the first machines, Hinton pushed AI into its current iteration by theorizing that nature had already cracked neural network-based AI in the human brain. He and other researchers, like Yann LeCun and Yoshua Bengio, proved that neural networks built upon multiple layers and a huge number of connections can enable machine learning.
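A rough sketch of why layers matter, in Python with NumPy: the XOR pattern below can't be learned by a single-layer perceptron, because no one straight line separates its classes, but a network with even a small hidden layer picks it up. The layer sizes, step count, and other numbers here are arbitrary illustrative choices, nowhere near the scale of the networks Hinton, LeCun, and Bengio worked on.

import numpy as np

# XOR: the classic pattern a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden units -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass through both layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: push the error back through the layers and
    # adjust every weight a little (gradient descent).
    output_error = (output - y) * output * (1 - output)
    hidden_error = (output_error @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ output_error
    b2 -= output_error.sum(axis=0, keepdims=True)
    W1 -= X.T @ hidden_error
    b1 -= hidden_error.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should end up close to [[0], [1], [1], [0]]

The extra layer lets the network combine simple decisions into a more complicated one, which is the basic idea behind the deep networks that followed.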
Through the 1990s and 2000s, researchers would slowly prove neural networks' potential. LeCun, for example, created a neural net that could recognize handwritten characters. But it was still slow going: While the theories were right on the money, computers weren't powerful enough to handle the amount of data necessary to see AI's full potential. Moore's Law finds a way, of course, and around 2012, both hardware and data sets had advanced to the point that machine learning took off: Suddenly, researchers could train neural nets to do things they never could before, and we started to see AI in action in everything from smart assistants to self-driving cars.
And then, in late 2022, ChatGPT blew up, showing professionals, enthusiasts, and the general public alike what AI could really do, and we've been on a wild ride ever since. We don't know what the future of AI actually has in store: All we can do is look at how far the tech has come, what we can do with it now, and imagine where we go from here.
Living with AI
To that end, take a look through our collection of articles all about living with AI. We define AI terms you need to know, walk you through building AI tools without needing to know how to code, talk about how to use AI responsibly for work, and discuss the ethics of generating AI art.