While artificial intelligence (AI) is making headlines today, the concept of "thinking machines" can be traced back to ancient philosophers and mathematicians. The phrase artificial intelligence was first used in the mid-20th century, at the dawn of the computer age. Like most transformative innovations throughout history, AI's possibilities have been met with a combination of excitement, skepticism, and fear.
We've been receiving several questions about AI from clients, so we thought we would answer some of the more common ones.
What is AI?
AI is a branch of computer science focused on giving machines, or computer-controlled robots, the ability to perform tasks that normally require human intelligence. AI aims to create intelligent machines that can replicate human behavior by programming them to think and learn like people.
Is AI a new concept?
Many AI applications are new, but the theory and some basic technology have existed for years. Innovators have built on their predecessors' work and brought us to the AI inflection point we are at today. Here's a brief history of AI and how it has improved people's lives over time.1
Keep in mind that any companies mentioned are for descriptive purposes only. This should not be considered a solicitation for the purchase or sale of their securities. Any investment should be consistent with your objectives, time frame, and risk tolerance.
In 1950, Claude Shannon, "the father of information theory," published "Programming a Computer for Playing Chess," the first article to discuss the development of a chess-playing computer program.
That same year, the idea of modern AI was first proposed by Alan Turing, who developed the concept of the Turing test, designed to determine whether a machine could convincingly imitate human conversation.
In 1955, Allen Newell, Herbert Simon, and Cliff Shaw developed Logic Theorist, the first AI computer program.
The following year, the Dartmouth Workshop brought together leading figures in AI, marking the birth of AI as a formal academic field.
In 1961, Unimate, an industrial robot, became the first to work on a General Motors assembly line. This marked a significant step in the application of AI in industry.
The first chatbot, called ELIZA, was created at MIT in 1965. ELIZA was an interactive computer program that could functionally converse with a person in English, opening up new possibilities for human-computer interaction.
The 1970s saw accelerated AI advancements, mainly focused on robots and automatons. AI research also branched into more specific areas, such as problem-solving, genetic algorithms, and expert systems.2
In 1970, work began at Japan's Waseda University on WABOT-1, the first anthropomorphic robot. Its features included movable limbs and the ability to see and converse.
Work began on MYCIN at Stanford University in 1972. MYCIN used AI to diagnose blood infections in patients based on reported symptoms and medical test results. A breakthrough in medical care, MYCIN operated at roughly the same level of competence as human specialists in blood infections, and considerably better than general practitioners.
The rapid growth of AI continued through the 1980s, despite a period of reduced interest known as the "AI winter."
Work on WABOT-2 began in 1980; the humanoid robot could communicate with people, read musical scores, and play music on an electronic organ.
In 1986, Mercedes-Benz demonstrated a driverless van. A predecessor of today's technology, it could drive at up to 55 mph on obstacle-free roads without a human driver.
AI technology started to permeate everyday applications by the 1990s.3
In 1994, Brian Pinkerton developed WebCrawler, the first full-text crawler-based Web search engine. It was the first search engine that allowed users to search for any word on a web page, which changed the standard for all future search engines.
In 1997, computer scientists developed long short-term memory (LSTM), a recurrent neural network (RNN) architecture later widely used for handwriting and speech recognition.
A significant milestone in AI occurred when IBM's Deep Blue supercomputer defeated world chess champion Garry Kasparov in 1997.
AI became prevalent across various applications throughout the 2000s. Spam filters began using machine learning to filter unwanted emails, improving the email experience. Face detection started to be implemented in cameras, improving photography. Machine learning algorithms also began to detect fraudulent financial transactions.1
This decade saw the development of personal AI assistants, eventually bringing the technology to a broad consumer audience. Additionally, Google's search algorithm started to utilize AI, improving search results and revolutionizing access to information.
The widespread use of AI and machine learning increased exponentially during the 2010s. AI improved search engine functionality, voice assistants, and self-driving cars. AI also started being used more extensively in healthcare for diagnostics, risk analysis, and treatment plans.1
In 2010, Microsoft launched Kinect for Xbox 360, the first gaming device that tracked human body movement using a 3D camera and infrared detection.
The following year, Apple released Siri, a virtual assistant with a natural-language user interface to infer, observe, answer, and recommend things to its human user. Amazon launched Alexa, its home assistant, a few years later in 2014.
A humanoid robot named Sophia, activated in 2016, became the first "robot citizen" thanks to her likeness to an actual human being and her ability to see, make facial expressions, and communicate through AI.
AI use has grown exponentially, with applications in almost every industry and use cases ranging from weather modeling to improving remote learning and working.4
In 2022, OpenAI launched ChatGPT. Based on the generative pre-trained transformer (GPT) architecture, ChatGPT can generate human-like responses in conversational settings. It has been trained on a vast amount of text data from the internet, allowing it to understand and generate coherent and contextually relevant responses to a wide range of questions submitted by a user.
As a reminder, companies are mentioned for descriptive purposes only. This should not be considered a solicitation for the purchase or sale of their securities. Any investment should be consistent with your objectives, time frame, and risk tolerance.
What is the Future of AI?
Because AI can quickly assimilate massive quantities of data and use generative learning to refine its capabilities, it has applications in almost every field. Here are a few of the industries we think AI will influence over time:4
- Health Care
- Financial Services
- Customer Service
Should I be Concerned About AI Taking Over the World?
Could AI infuse machines with capabilities that outstrip those of humans? Computers can certainly analyze data and respond to queries at speeds far exceeding human capacities. However, speed doesn't automatically equate to a higher level of expertise. Hence, the scenario of machines taking over our world remains largely a matter of science fiction.
Rather than viewing AI as a potential menace, we should acknowledge it as the potent tool it truly is. Its groundbreaking influence extends far beyond the corporate world, inspiring innovative developments in fields as diverse as healthcare, education, and more. This transformative potential of AI to redefine various aspects of our lives underscores its significant role in shaping the future of our interconnected world.
There are, however, legitimate concerns about AI's potentially disruptive impact on the economy, jobs, ethics, security, accountability, and overall societal well-being. Lawmakers, regulators, industry leaders, and researchers should work together to address these concerns and develop responsible AI practices and policies. Unilateral action to slow the adoption and development of AI may put the U.S. at a competitive disadvantage, so any action needs to consider all implications.
Will AI Impact the Way I Interact with my Financial Professional?
The role of AI in financial services has been increasing and will likely continue to expand. AI allows firms to automate certain aspects of running the business, especially back office and operational functions. Financial professionals will use AI as they have other new technology over the years, allowing them to be more efficient and spend more time with clients.
As you know, the most important financial decisions are rarely black or white and usually do not have simple right or wrong answers. Providing financial guidance is much more nuanced and informed by each client’s goals, time horizon, risk tolerance, and, of course, emotional factors.
Financial professionals know how to balance financial data with common sense and empathy. Relationships with clients are built on trust, effective communication, and setting realistic expectations.
While AI will likely help with many of the data-driven technical aspects of financial services going forward, the human element will remain essential.
We are Always Here to Answer Your Questions
As we have seen throughout history, not all new technologies live up to their potential. But some far outperform expectations and change the way we live. The personal computer, the internet, and cell phones are a few we have seen in our lifetimes. Will AI be the next? Maybe. It is one of the more transformative technologies of the 21st century so far, and we will continue to learn of new ways in which it impacts businesses and even everyday life.
Even as AI continues to transform the way we navigate our lives, from personal decision-making to business strategy, it highlights the value of the human touch: the nuanced understanding and personalized approach that a good financial professional provides. If you have any questions or concerns about how AI intersects with your financial journey, please do not hesitate to contact us.
1 G2.com, May 25, 2021. "A Complete History of Artificial Intelligence."
2 Britannica.com, 2023
3 Soft-Surge.com, 2023. "A Brief History of Web Crawlers."
4 MITTechnologyReview.com, March 3, 2023