According to the Oxford Languages Dictionary, AI can be defined as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."

In the book Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, another set of definitions is grouped into two approaches: human and ideal. The human approach defines AI as systems that think like humans or act like humans. The ideal approach defines AI as systems that think or act rationally.

The AI field combines computer science with large datasets to enable problem-solving, and it encompasses the sub-fields of machine learning and deep learning. Together these fields produce AI algorithms that make predictions based on input data, like ChatGPT. AI has gone through hype cycles, but since ChatGPT's public release, its focus on natural language processing has held the attention of experts and the public alike. Natural language processing involves giving computers the ability to understand text and spoken words in much the same way human beings can. To achieve this, AI has three different learning stages.
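To make "predictions based on input data" concrete, here is a minimal sketch of an NLP-style model: a toy sentiment classifier built with scikit-learn. The training sentences and labels are invented for illustration; a real system like ChatGPT is vastly larger, but the core idea of learning a mapping from text to an output is the same.

```python
# A minimal sketch of NLP-style prediction: a toy sentiment classifier.
# The training sentences and labels are made-up examples, not real data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "I loved this movie",
    "What a fantastic experience",
    "This was a terrible film",
    "I hated every minute",
]
train_labels = ["positive", "positive", "negative", "negative"]

# Convert raw text into word-count features the model can learn from.
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)

model = MultinomialNB()
model.fit(X_train, train_labels)

# Predict on unseen input: the model maps new text to a label.
X_new = vectorizer.transform(["I really loved it"])
print(model.predict(X_new))  # expected: ['positive']
```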

Different Types of Learning in AI

The first stage is the simplest type of learning AI can perform. It is labeled Artificial Narrow Intelligence (ANI), narrow AI, or weak AI, and refers to a machine's ability to apply intelligence to one specific task type, often involving pattern-matching. This is the most common implementation of AI. Virtual assistants such as Apple's Siri or Amazon's Alexa are examples of narrow AI, and it can also be found in self-driving cars. Recently, ChatGPT has gained a big reputation as weak AI; it falls under a subcategory called Generative AI, which is discussed later in this blog.

The second stage is the emerging type of learning: if development continues on its current trajectory, AI will eventually be capable of thinking and making decisions like humans. It is labeled Artificial General Intelligence (AGI) or strong AI. Many argue that considerable debate and research are still needed before accurate AGI systems can be fully developed. However, it seems we are approaching a reality where the theoretical possibilities of AI actually become machine qualities.

The third stage has not been reached, and may never be. In this stage, AI surpasses human intelligence; it is referred to as Artificial Super Intelligence (ASI). This is the stage that inspired classic science fiction movies in which machines take over humanity. Although we are only approaching the second stage of AI learning, if the current momentum continues, AI may reach stage three sooner than anticipated.

The Functionalities of AI Systems

The implementation of AI differs across the five types of systems and their functionalities. Implementations started simple and have gradually become more complex; I have noted an "evolutionary" pattern in these functionalities. Another observation is that three of the five types of systems are considered weak AI, while the other two would still need to be assessed to determine their type of learning.

The first system is reactive machines, which can be considered narrow AI. They operate solely on present data, accounting only for the current situation; they are purely reactive and are not designed to draw inferences. Given a specific input, the output will always be the same. For instance, one successful implementation of reactive AI was a math formula for movie recommendations, which became a factor in Netflix's success. The achievement was the AI's ability to take in huge amounts of data and produce intelligent output, which in Netflix's case was a math equation. Another example was the famous IBM chess program that beat the world chess champion, Garry Kasparov. This type of functionality was the first implementation of AI. Reflecting on our human experience, though, most of our actions are not reactive, because we might not have all the information we need to act. What makes us superior to machines is our ability to anticipate and prepare for the unexpected with little, or even imperfect, information. This ability was the next step for AI, which evolved into limited memory AI.
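To illustrate what "the same input always produces the same output" means, here is a minimal sketch of a reactive system: a rule-based recommender that is a pure function of its current input. The catalog and scoring rule are made up for illustration and are not Netflix's actual formula.

```python
# A minimal sketch of a reactive system: output depends only on the current
# input, so the same input always yields the same recommendation.
# The catalog and scoring rule below are invented illustrations.

CATALOG = {
    "Space Drama": {"genre": "sci-fi", "rating": 4.6},
    "Robot Uprising": {"genre": "sci-fi", "rating": 4.2},
    "Baking Show": {"genre": "reality", "rating": 3.9},
}

def recommend(preferred_genre: str) -> str:
    # Score each title: prefer a genre match, then break ties by rating.
    # No memory, no learning: a pure function of the present input.
    scored = sorted(
        CATALOG.items(),
        key=lambda item: (item[1]["genre"] == preferred_genre, item[1]["rating"]),
        reverse=True,
    )
    return scored[0][0]

print(recommend("sci-fi"))  # always "Space Drama" for this input
```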

Limited memory AI is another type of narrow AI and an improvement on reactive AI. It has the ability to improve over time by studying past data held in its memory, which lets it act on imperfect data. An example can be seen in self-driving cars, where the AI uses the vehicle's sensors to identify its environment and make better driving decisions. Even though this is a noticeable improvement over reactive AI, it still has limitations: one notable condition is that it needs huge amounts of data to learn even unsophisticated tasks.
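Below is a minimal sketch of the limited-memory idea: keeping a short window of past observations to make a more robust decision than a purely reactive system could. The sensor readings and window size are illustrative, not taken from a real vehicle.

```python
# A minimal sketch of "limited memory": the system keeps a short window of
# past observations and uses them to improve its next decision.
# The sensor values and window size are illustrative, not from a real car.
from collections import deque

class SensorSmoother:
    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # only recent past is retained

    def update(self, reading: float) -> float:
        # Store the new reading, then estimate from the averaged past,
        # which is more robust than reacting to a single noisy value.
        self.history.append(reading)
        return sum(self.history) / len(self.history)

smoother = SensorSmoother()
for distance in [10.2, 9.8, 55.0, 10.1, 9.9]:  # 55.0 is a noisy spike
    estimate = smoother.update(distance)
    print(f"raw={distance:5.1f}  smoothed estimate={estimate:5.1f}")
```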

The third type of AI is newer and more challenging to classify because it is an enhanced limited memory AI, called Generative AI. This AI can generate text, audio, images, videos, and other media. A popular example is OpenAI's ChatGPT, released in late 2022, which can call on past answers to update its current output. The more science-fiction implementations of AI, such as self-aware AI with the potential to possess consciousness, are covered next.
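Before turning to those still-theoretical types, here is a minimal sketch of how a chat model "calls on past answers": the application resends the earlier turns of the conversation with every request. This assumes the OpenAI Python client (openai >= 1.0) with an API key in the environment; the model name is illustrative.

```python
# A minimal sketch of conversational "memory": the application resends the
# prior turns with each request. Assumes the OpenAI Python client
# (openai >= 1.0) and an OPENAI_API_KEY in the environment; the model name
# is illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "Suggest a name for a pet robot."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
answer = first.choices[0].message.content
print(answer)

# Append the model's answer plus a follow-up; the past answer is now context.
history.append({"role": "assistant", "content": answer})
history.append({"role": "user", "content": "Now make it sound friendlier."})

second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)
```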

The other two types of AI-based systems are theory of mind AI and self-aware AI, which have not been fully developed but are being heavily researched. These two types would be considered strong AI. Specifically, theory of mind AI would be more advanced than anything developed thus far: it would have the capability to understand emotional intelligence, so that human beliefs and thoughts can be better comprehended.

The Unpredictability of AI

After researching Google Bard and Amazon's AI incident, one concern seems apparent: AI can be unpredictable. Google Bard was not designed to translate between languages, but after its public release it taught itself to do so. Google's own CEO said that they are still learning more about their technology every day. Another example occurred in 2018, when Amazon's AI discriminated against female applicants.

AI is designed with a predetermined goal, such as content creation or data analysis. Concerns about copyright infringement and unethical practices arise because machines are not concerned with ethics or morals the way humans are. AI is unpredictable and can therefore produce unpredictable consequences, which leads to another important question: do we trust the companies developing AI?

Can We Trust AI Developers?

After a recent lawsuit against the creator of the generative AI tool ChatGPT for allegedly harvesting huge amounts of private data, we can't help but ask: should we trust these developers at a time when most companies are racing to win the innovation race? The lawsuit hints at the price some are willing to pay for the sake of profit and innovation; in this case, your private data.

It could be argued that developers like Google, OpenAI, and Microsoft cannot yet be fully trusted with this technology because the product is still new and unpolished. I view it like a prototype car: people would not buy a prototype vehicle that had not passed any safety regulations. Similarly, businesses are not being regulated for this AI technology because it is new; companies are making the standards up as they go, which is not reassuring.

Who is the Governing Body?

Since the technology is new, there is no governing body at the moment. Companies have told the public that they have allocated resources to internal teams to oversee the ethics and responsibilities of AI development. Although this can be reassuring, more regulation needs to be put into practice. There needs to be third-party oversight, because the lawsuit mentioned above has shown that internal ethics oversight alone is not enough. Currently, a small number of large corporations hold most of the market power in AI. With more companies jumping on the AI trend, the government needs to step in and regulate the development of AI technology because of its unexpected effects on consumers.

AI's Effect on Consumers

AI's effect on consumers parallels the effect automation technology had on the workforce when manufacturing companies introduced industrial robots to perform repetitive tasks, displacing workers. Essentially, we are entering a new wave of automation technology, one that will impact both businesses and the average consumer.

AI in Business

Even though today's headlines say that AI will boost productivity and efficiency, among other things, those benefits overshadow the challenges AI will bring. The first issue is the displacement of knowledge workers. According to the Oxford Languages Dictionary, a knowledge worker is a person whose job involves handling or using information. Analysts, technology developers, researchers, lawyers, and education professionals are just a few examples. Some argue that AI will replace humans in certain professions; others believe that jobs will be redefined so that humans work alongside AI. While both predictions are already playing out, AI technology will not dethrone human workers.

There are examples of AI implementations gone wrong, from human resources all the way to lawyers being fined for using AI. In 2018, Amazon had to scrap its experimental AI hiring tool because it had developed a bias against women. The problem arose because the AI was trained on resumes submitted over the previous decade, and it learned to favor male applicants because, in that data, men were the dominant demographic among IT applicants. It penalized resumes that included the word "women's". In a more recent incident, two lawyers were fined $5,000 for submitting falsified legal research generated by ChatGPT; the citations were found to be based on made-up cases. These examples show that AI is here to stay, but it is not yet reliable enough to be fully implemented by businesses outside AI development.
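As a minimal sketch of how this kind of bias can arise, the toy example below trains a classifier on invented "resumes" whose historical outcomes skew male; the model then learns a negative weight for a gendered token. This illustrates the mechanism only and is not Amazon's actual system.

```python
# A minimal sketch of how skewed historical data produces a biased model.
# The toy "resumes" and outcomes are invented; this is not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java mens chess club",
    "developer python mens soccer team",
    "software engineer java womens chess club",
    "developer python womens soccer team",
]
# Historical outcomes skewed toward male applicants: the first two were
# hired, the last two rejected, even though the skills are identical.
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for "womens": the model has encoded the bias.
idx = vec.vocabulary_["womens"]
print("weight for 'womens':", model.coef_[0][idx])  # negative => penalized
```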

Another issue that has come to light is the intellectual property risk that AI introduces. The first risk is the high potential for copyright infringement claims from third parties: copyrighted images used by generative AI tools can result in infringement lawsuits. There is also a risk that companies might not be able to copyright output produced by their own AI, because it was not created by a human. A more complicated risk is that use of AI is governed by the developer's terms of use, which means ownership of the output may rest with either the company using the tool or the developer.

The complications that AI can introduce to businesses are extensive, which is why businesses should use AI with caution.

AI Among the General Public

How will the average Joe be affected by AI? It is too soon to say, but innovation has always had its downsides. Because AI is so new, we simply don't yet know its full implications. Compare it to the iPhone: while it revolutionized the way we communicate and interact with technology and with one another, it still had negative consequences.

For instance, the rise of the iPhone has fostered phone addiction, with endless hours of scrolling contributing to the anxiety of many users. The elderly have also been affected: scammers have used smartphones to target the elderly population, sending phishing emails and taking advantage of their limited understanding of the technology.

The bottom line is that the obvious benefits of AI will appeal to the average user, but we should stay vigilant about its long-term effects on the population. Users are already turning to ChatGPT to create top-tier resumes, yet some fail to realize they are mindlessly handing their private information to these AI companies. Another consequence that needs study is the potential for people to hide their incompetence behind AI; right now, a person can pretend to have a skill by simply asking ChatGPT to do it for them. Overall, the job market will be affected, and people will have to adapt and learn new skills to stay competitive with ever-changing market demands.

History has shown time and time again that those who fail to adapt and learn are left behind. Companies and the average Joe alike need to be careful and vigilant about these new changes; the technology is still new, without any governing oversight, and that is a risk not worth taking.