What is artificial intelligence?

In recent years, the words “artificial intelligence” have begun popping up everywhere: billboards, books, blog posts, job postings, and television shows. The idea is much older, though, going back at least to the “heartless” Tin Man of the Wizard of Oz in the early 20th century. “By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds” (Anyoha). In 1968, the film 2001: A Space Odyssey introduced the famous HAL 9000, an AI that goes rogue. From there, references to AI in pop culture became nothing short of prolific.

But artificial intelligence did not remain confined to pop culture or science fiction. As computing became more affordable, artificial intelligence became a more common topic of serious discussion. In 1956, John McCarthy and Marvin Minsky hosted the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), a conference meant to bring the top minds together for “an open ended discussion on artificial intelligence” (Anyoha). In fact, the term “artificial intelligence” was coined at this event. Though the conference did not go quite as McCarthy planned, it still jump-started the next two decades of research.

As the number of websites and personal computers increased, so did the amount of data being collected. Artificial intelligence offered an appealing answer: programs that could process large amounts of data in a short time, without the human labor, and human error, that manual processing involves. And if those programs could learn as they went, getting better at their jobs over time, so much the better.
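To make the idea of a program that “learns as it goes” concrete, here is a deliberately tiny sketch in Python. It is not how real spam filters work, and every message and word in it is made up for illustration; it only shows the basic loop of updating counts from labeled examples and then scoring new input against what was learned.

```python
# A toy "learning" program: it counts words from labeled examples and
# scores new messages by how spam-like their words have been so far.
# All messages here are invented for illustration.
from collections import Counter

spam_words = Counter()
ham_words = Counter()

def learn(message, is_spam):
    """Update word counts from one labeled example."""
    target = spam_words if is_spam else ham_words
    target.update(message.lower().split())

def spam_score(message):
    """Fraction of words seen more often in spam than in ham."""
    words = message.lower().split()
    spammy = sum(1 for w in words if spam_words[w] > ham_words[w])
    return spammy / len(words) if words else 0.0

# "Training" on a few labeled messages
learn("win a free prize now", is_spam=True)
learn("claim your free money", is_spam=True)
learn("lunch meeting moved to noon", is_spam=False)
learn("see you at the meeting", is_spam=False)

print(spam_score("free prize waiting"))  # high score -> likely spam
print(spam_score("meeting at noon"))     # low score -> likely fine
```

Each new labeled message refines the counts, so the program’s judgments improve as more examples arrive, which is the core appeal described above.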

Now, AI is used in the spam filters of our email inboxes, in Roombas as they figure out the best route to clean a house, in the advancing field of self-driving cars, and in the increasingly common digital assistants found in homes all over the world. It is also used in supply chains, farming and livestock management, ride-sharing, finance, and even in tracking infectious diseases (“Artificial Intelligence Today and Tomorrow”).

AI isn’t perfect, of course: if the data given to the AI is biased, the results will be as well. The article “Artificial Intelligence Today and Tomorrow” gives an example: “some companies had begun using AI to comb through resumes and help make hiring decisions. Now some of those employers are rethinking the technology after research indicated the systems could be biased against women and minorities. If few women have held a job in the past, the computer might downgrade the applications of new female candidates.” One solution in progress is having the AI explain the rationale behind each decision it makes, an approach referred to as “explainable AI.” Examining the AI’s stated logic can help reveal biases and allow developers to eliminate them from the system.
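The hiring example can be made concrete with a toy sketch (all data below is invented). A naive model that scores applicants by their group’s historical hire rate will faithfully reproduce whatever skew exists in that history, exactly the failure mode the article describes:

```python
# A toy illustration of biased training data producing biased output.
# The "model" just learns past hire rates per group, so a group rarely
# hired in the past scores low regardless of individual qualifications.
historical_hires = [
    # (group, was_hired) -- skewed history: group "A" hired often,
    # group "B" rarely. Data is hypothetical.
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def hire_rate(group):
    """Fraction of past candidates from this group who were hired."""
    outcomes = [hired for g, hired in historical_hires if g == group]
    return sum(outcomes) / len(outcomes)

# Scoring new applicants by their group's past rate echoes old bias:
print(hire_rate("A"))  # 0.75 -- favored by the historical pattern
print(hire_rate("B"))  # 0.25 -- downgraded, purely from past skew
```

Nothing in this code is malicious; the unfairness comes entirely from the data it was given, which is why inspecting a model’s rationale matters.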

As AI becomes increasingly ingrained in our everyday lives, it is important to understand what AI is and what it isn’t. AI is a way of making our lives easier, a helpful tool when used correctly. It can be dangerous, but mostly when it is misused or, as with biased hiring data, built on flawed inputs.

Sources
Adams, R.L. “10 Powerful Examples of Artificial Intelligence in Use Today.” Forbes, 10 January 2017.
Anyoha, Rockwell. “The History of Artificial Intelligence.” Science in the News, Harvard University Graduate School of Arts and Sciences, 28 August 2017.
“Artificial Intelligence.” Google Ngram, chart generated 13 April 2021.
“Artificial Intelligence.” Merriam-Webster.com Dictionary, Merriam-Webster, 2 April 2021.
“Artificial Intelligence Today and Tomorrow.” Senate RPC, 27 February 2020.
“Artificial Intelligence in Fiction.” Wikipedia, 10 April 2021.
Reynoso, Rebecca. “A Complete History of Artificial Intelligence.” Learning Hub, 1 March 2019.
