
International Journal of Sensor Networks and Data Communications

ISSN: 2090-4886

Open Access

Mini Review - (2023) Volume 12, Issue 1

Computer Artificial Intelligence: History and Types

Vahid Rakhshan*
*Correspondence: Vahid Rakhshan, Department of Psychology, Catholic University of Milan, Milan, Italy, Email:
Department of Psychology, Catholic University of Milan, Milan, Italy

Received: 31-Dec-2022, Manuscript No. sndc-23-92865; Editor assigned: 03-Jan-2023, Pre QC No. P-92865; Reviewed: 14-Jan-2023, QC No. Q-92865; Revised: 20-Jan-2023, Manuscript No. R-92865; Published: 28-Jan-2023, DOI: 10.37421/2090-4886.2023.12.199
Citation: Rakhshan, Vahid. “Computer Artificial Intelligence: History and Types.” J Sens Netw Data Commun 12 (2023): 199.
Copyright: © 2023 Rakhshan V. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Artificial Intelligence (AI) is a rapidly growing field in computer science that involves the creation of intelligent machines capable of performing tasks that typically require human intelligence, such as understanding natural language, recognizing images, and learning from experience. AI has been used to develop intelligent systems that can perform a wide range of tasks, from analyzing vast amounts of data to driving cars autonomously. This article will explore the basics of computer artificial intelligence, its history, types, applications, and future prospects.

Keywords

Artificial intelligence • Translating languages • Human intelligence

Introduction

Speculation about the nature of the human mind and its relationship to the body reaches back to philosophers from Aristotle in antiquity to Descartes in the seventeenth century. However, the formal study of AI began in the 1950s, when computer scientists started to develop algorithms and programs that could mimic human intelligence. The term "artificial intelligence" was coined in 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon at the Dartmouth Conference. The conference marked the beginning of AI as a formal academic discipline, and it brought together researchers from various fields, including computer science, psychology, and mathematics, to discuss the possibilities of creating intelligent machines [1].

Literature Review

Over the next few decades, AI research progressed rapidly, and many important breakthroughs were made, including the development of expert systems, natural language processing algorithms, and machine learning techniques. However, progress in AI was not always smooth, and there were several setbacks, including the so-called "AI winter" of the 1970s and 1980s, when funding for AI research declined due to a lack of tangible results. There are two broad types of AI: narrow (or weak) AI and general (or strong) AI. Narrow AI is designed to perform a specific task or set of tasks, such as recognizing faces in an image or translating languages. These systems are trained on large datasets and use algorithms to identify patterns and make predictions, but they do not possess general intelligence and cannot perform tasks outside their area of expertise.
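To make the idea of narrow AI concrete, the following minimal sketch (in Python, assuming the scikit-learn library is available) trains a simple classifier on a bundled handwritten-digit dataset. The dataset and model choice are illustrative assumptions, not methods drawn from any system discussed in this review.

# A minimal sketch of a "narrow AI" system: a classifier that learns to
# recognize handwritten digits from labeled examples (assumes scikit-learn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # learns patterns from the training data
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("prediction for one test image:", model.predict(X_test[:1]))

The point of the sketch is that the system's competence comes entirely from patterns in its training data and does not extend beyond digit recognition, which is exactly what the "narrow" or "weak" label conveys.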

Discussion

General AI, often referred to as artificial general intelligence (AGI), is designed to be capable of performing any intellectual task that a human can. No AGI systems currently exist, but researchers are working toward them. Artificial intelligence already has a wide range of applications. In healthcare, it is used to build diagnostic tools that can detect diseases such as cancer and Alzheimer's disease at an early stage and to develop personalized treatment plans for patients. In finance, it powers trading algorithms that make predictions about the stock market and systems that detect fraudulent transactions. In transportation, it underpins autonomous vehicles that can drive themselves without human intervention. In manufacturing, it is used to optimize production processes and to identify defects in products. In education, it supports personalized learning systems that adapt to the learning style and pace of individual students [2].

Artificial intelligence is expected to have a profound impact on society in the coming years and is likely to transform many industries and sectors. It is expected to play a major role in the development of personalized medicine, where treatments are tailored to individual patients based on their genetic makeup and medical history. Autonomous vehicles are expected to become increasingly common, which could lead to significant changes in how people travel and how cities are designed. In finance, more advanced trading algorithms and other AI-driven tools are likely to become standard. AI has come a long way since its inception in the 1950s: with the help of computers and advanced algorithms, it can now solve complex problems, learn from data, and make predictions that were once impossible. Computer artificial intelligence refers to the use of computers to simulate human intelligence and perform tasks that would typically require human intervention. This article discusses the history of computer AI, the different types of AI, and its applications in various industries [3].

The idea of creating a machine that can think and learn like a human being has been around for centuries. However, the first actual AI program was written in 1951 by Christopher Strachey, who created a checkers-playing program on a Ferranti Mark I computer. The first AI conference was held at Dartmouth College in 1956, where researchers from different fields came together to discuss the possibilities of AI; this conference is often considered the birth of AI. In the 1960s and 1970s, AI research focused on expert systems, which are computer programs that simulate the decision-making abilities of a human expert in a specific domain. The 1980s saw the development of machine learning algorithms, which allowed computers to learn from data without being explicitly programmed. In the 1990s, AI research shifted toward neural networks, which are modeled after the structure of the human brain and can learn from data and improve over time. Today, AI is integrated into many aspects of our daily lives, from virtual assistants like Siri and Alexa to self-driving cars and personalized ads on social media [4,5].
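As an illustration of "learning from data without being explicitly programmed," the following sketch in plain Python trains a single perceptron, one of the earliest machine learning models, on examples of the logical AND function. The task, learning rate, and epoch count are assumptions chosen purely for demonstration.

# A single perceptron adjusts its weights from labeled examples of AND;
# no rule for AND is written anywhere in the program (plain Python, no libraries).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, learned rather than hand-coded
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):              # repeated passes over the data
    for x, target in examples:
        error = target - predict(x)  # 0 when the guess is already right
        w[0] += lr * error * x[0]    # perceptron learning rule
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # expected output: [0, 0, 0, 1]

The weights that implement AND emerge from repeated corrections against the examples, which is the basic idea that later machine learning algorithms and neural networks build on.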

There are three types of computer AI: narrow (or weak) AI, general (or strong) AI, and super AI. Narrow AI is designed to perform a specific task, such as image recognition or speech recognition. These systems are not capable of learning beyond their programmed functions, and their performance is limited to the task they were designed for; examples include chatbots, recommendation systems, and fraud detection systems. General AI is designed to simulate human intelligence and perform any intellectual task that a human can do; it would be capable of learning and improving over time and of applying its knowledge across domains, but it remains largely hypothetical and has not yet been achieved. Super AI refers to a hypothetical future system that is more intelligent than humans in every respect and could perform tasks that are currently impossible for humans, such as predicting the future or solving complex global problems.

AI has numerous applications in various industries, including healthcare, finance, retail, and manufacturing. In healthcare, AI is used to analyze medical data and make predictions about patient outcomes, and AI systems can assist doctors in diagnosing diseases and creating personalized treatment plans for patients. In finance, AI is used to detect fraud, predict market trends, and make investment decisions [6].

Conclusion

AI systems can analyze large amounts of financial data and identify patterns that are difficult for humans to detect. In retail, AI improves the customer experience through personalized recommendations, price optimization, and fraud detection, and it can automate inventory management and supply chain operations. In manufacturing, AI is used to optimize production processes, reduce waste, and improve product quality; AI systems can analyze data from sensors and other sources to identify patterns and predict equipment failures and maintenance needs.
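As a rough sketch of the sensor-driven predictive maintenance described above, the following Python example (assuming NumPy and scikit-learn are installed) flags unusual readings in a synthetic temperature stream with an isolation-forest anomaly detector. The data, contamination rate, and model choice are hypothetical and are not taken from any system cited in this review.

# Unsupervised anomaly detection on synthetic sensor readings:
# the model flags readings that deviate from the bulk of the data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50.0, scale=2.0, size=(500, 1))  # typical temperature readings
faulty = np.array([[75.0], [12.0], [68.0]])              # injected out-of-range readings
readings = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(readings)                  # -1 marks a suspected anomaly

print("flagged readings:", readings[labels == -1].ravel())

In a real deployment the flagged readings would be cross-checked against maintenance records rather than acted on directly; the sketch only shows the pattern-spotting step the text refers to.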

Acknowledgement

None.

Conflict of Interest

The author declares no conflict of interest.


References

1. Shekhar, Sudhanshu, Salil Karipott, Robert Guldberg and Keat Ong, et al. "Magnetoelastic sensors for real-time tracking of cell growth." Biotechnol Bioeng 118 (2021): 2380-2385.

2. Xiao, Xilin, Manli Guo, Qiangxiang Li and Qingyun Cai, et al. "In-situ monitoring of breast cancer cell (MCF-7) growth and quantification of the cytotoxicity of anticancer drugs fluorouracil and cisplatin." Biosens Bioelectron 24 (2008): 247-252.

3. Guntupalli, R., J. Hu, Ramji S. Lakshmanan and T. S. Huang, et al. "A magnetoelastic resonance biosensor immobilized with polyclonal antibody for the detection of Salmonella typhimurium." Biosens Bioelectron 22 (2007): 1474-1479.

4. Guntupalli, R., R. S. Lakshmanan, J. Hu and T. S. Huang, et al. "Rapid and sensitive magnetoelastic biosensors for the detection of Salmonella typhimurium in a mixed microbial population." J Microbiol Methods 70 (2007): 112-118.

5. Pang, Pengfei, Sijing Huang, Qingyun Cai and Shouzhuo Yao, et al. "Detection of Pseudomonas aeruginosa using a wireless magnetoelastic sensing device." Biosens Bioelectron 23 (2007): 295-299.

6. Meyers, Kaylee Marie and Keat Ghee Ong. "Magnetoelastic Materials for Monitoring and Controlling Cells and Tissues." Sustainability 13 (2021): 136-155.
