Artificial Intelligence is a branch of computer science that aims to simulate human intelligence in machines, so that machines can perform activities that typically require human intelligence. Functions performed by AI systems include planning, learning, reasoning, problem-solving, and decision-making.
AI systems are built on general algorithms that use techniques such as machine learning, deep learning, and hand-written rules. Machine learning feeds data to a system and uses statistical methods to enable it to learn, so that it gradually gets better at a task without being explicitly programmed for it.
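To make "learning from data rather than explicit rules" concrete, here is a minimal sketch of a learned classifier. The technique (a nearest-centroid classifier), the single feature (message length), and the toy data are all invented for illustration; they are not from any particular system.

```python
# A minimal sketch of "learning from data": instead of hand-coding a rule,
# we estimate one from labelled examples (a nearest-centroid classifier).

def train(examples):
    """Compute the mean feature value of each class from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Assign the class whose mean is closest to the new value."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Toy data: message length as the only feature; "spam" messages tend to be long.
training_data = [(120, "spam"), (150, "spam"), (20, "ham"), (35, "ham")]
model = train(training_data)
print(predict(model, 140))  # a long message is classified as "spam"
```

Note that nothing in the code says "long messages are spam"; that rule is inferred from the examples, which is the essence of machine learning.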
AI technologies are classified by their capacity to mimic human intelligence, using criteria such as real-world capability and the theory of mind.
Based on these characteristics, all AI systems fall into one of three types:
- Artificial general intelligence is one with human capabilities;
- Artificial narrow intelligence is one with a narrow range of abilities;
- Artificial superintelligence is one that is more capable than a human.
Artificial General Intelligence
It is also called deep AI or strong AI.
It is the concept of a machine with general intelligence that mimics human intelligence, with the ability to keep learning and to apply its intelligence to solve any problem. Such a system could think, understand, and act in a way indistinguishable from a human in any given situation.
Deep AI uses the theory-of-mind AI framework: the ability to recognize the needs, emotions, beliefs, and thought processes of other intelligent entities. Theory-of-mind-level AI is about training machines to truly understand humans.
The challenge of achieving strong AI is not surprising when you consider that the human brain is the model for general intelligence. Because we lack comprehensive knowledge of how the human brain functions, researchers struggle to replicate even its basic activities.
Mark Gubrud used the phrase "artificial general intelligence" as early as 1997, in a study of the consequences of fully automated military production and operations. Shane Legg and Ben Goertzel reintroduced and popularised the phrase around 2002. John Carmack, the video game programmer and aerospace engineer, announced plans to work on AGI in 2019.
Informally, the most difficult problems for computers are referred to as "AI-complete" or "AI-hard", suggesting that solving them requires the general intellect of a human, or strong AI, beyond the capabilities of any purpose-specific algorithm. Many experts doubt that AGI will ever be achievable, and many more wonder whether it would even be desirable if it were possible. Stephen Hawking, for example, warned that strong AI would take off on its own, re-designing itself at an ever-increasing rate, so that the human race, constrained by slow biological evolution, would be superseded. As another example, the Fujitsu-built K supercomputer, one of the fastest supercomputers in the world, required 40 minutes to simulate a single second of brain activity. Such examples suggest AGI will remain hard to achieve in the near future.
It is difficult to define artificial general intelligence. However, as with humans, AGI systems should possess a number of features:
- Common Sense
- Background Knowledge
- Transfer Learning
Optimistic experts believe AGI and ASI are conceivable, but it is difficult to determine how far we are from attaining these levels of AI.
Artificial Narrow Intelligence
It is also called weak artificial intelligence.
Out of the three types of artificial intelligence, ANI is the only one we have successfully realized to date. It is goal-oriented, designed to perform a single task at a time, such as facial recognition, voice assistance, driving a car, or searching the internet. It is therefore very good at the specific task it is programmed to do.
Bear in mind, however, that these systems operate under a narrow set of conditions and limitations, which is why this type is commonly referred to as weak AI. It does not replicate human intelligence; it merely simulates human behavior within a narrow range of parameters and contexts.
ANI has seen many breakthroughs in the last decade, driven by advances in machine learning and deep learning. For example, AI systems today help diagnose cancer and other diseases with high accuracy. Many narrow AI systems rely on natural language processing (NLP) to perform their functions, which makes NLP especially well suited to building chatbots and similar technologies.
Most narrow AI is either reactive or limited-memory. Reactive AI has no memory or data-storage capabilities, while limited-memory AI is more advanced: it is equipped with data storage and learning capabilities that let a machine use past data to inform its decisions.
Symbolic AI and machine learning are the two main families of narrow AI techniques available today.
Symbolic Artificial Intelligence (AI): Most of the history of artificial intelligence was spent researching symbolic AI, often known as "good old-fashioned AI" (GOFAI). Symbolic AI requires programmers to write rules that specify the behavior of an intelligent system in great detail. It is appropriate for situations where the environment is predictable and the rules are explicit. Although symbolic AI has fallen out of favor in recent years, the majority of the applications we use today are still rule-based.
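A tiny sketch of the symbolic, rule-based approach described above. The domain (symptom triage), the thresholds, and the advice strings are all made up for illustration; the point is that every behavior is spelled out by the programmer, so the system knows nothing beyond what the rules cover.

```python
# A minimal sketch of symbolic (GOFAI-style) AI: the "intelligence" is a set
# of explicit, hand-written rules, checked in order of priority.

def triage(temperature_c, has_cough):
    """Hand-written rules mapping symptoms to advice (illustrative only)."""
    if temperature_c >= 39.0:
        return "see a doctor"
    if temperature_c >= 37.5 and has_cough:
        return "rest and monitor"
    return "no action needed"

print(triage(39.5, False))  # "see a doctor"
print(triage(38.0, True))   # "rest and monitor"
print(triage(36.6, False))  # "no action needed"
```

This works well while the environment matches the rules, but any case the programmer did not anticipate falls through to a default, which is exactly the brittleness that pushed the field toward machine learning.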
Machine Learning: It is a subfield of artificial intelligence that builds intelligent systems by learning from examples. To "train" a machine learning system, a programmer first constructs a model and then feeds it many examples. From these, the algorithm builds a mathematical representation of the data that can be used for prediction and classification tasks. For instance, a machine learning system trained on thousands of bank transactions labelled with their outcome (legal or fraudulent) can predict whether a new bank transaction is likely to be fraudulent.
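The fraud example above can be sketched with a k-nearest-neighbours classifier: a new transaction gets the majority label of the labelled transactions most similar to it. The features (amount and hour of day), the transactions, and the choice of k-NN are all illustrative assumptions, not a real fraud-detection system.

```python
# A hedged sketch of the fraud example: classify a new transaction by majority
# vote among its k nearest labelled neighbours (k-NN, illustrative data only).

def knn_predict(training, point, k=3):
    """Label a point by majority vote among its k nearest training points."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda ex: dist(ex[0], point))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Each transaction: (amount in dollars, hour of day) with a known outcome.
transactions = [
    ((12.0, 14), "legal"), ((30.0, 10), "legal"), ((8.0, 19), "legal"),
    ((950.0, 3), "fraudulent"), ((1200.0, 2), "fraudulent"), ((700.0, 4), "fraudulent"),
]
print(knn_predict(transactions, (1000.0, 3)))  # "fraudulent"
print(knn_predict(transactions, (25.0, 12)))   # "legal"
```

Production systems use far richer features and models, but the principle is the same: the labels come from past data, not from hand-written rules.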
Some examples can be – Image recognition, Google Search, self-driving cars, email spam filters, etc.
Artificial Super Intelligence
Artificial superintelligence (ASI) is a hypothetical form of AI that does not merely match human intelligence; such machines would become self-aware and exceed human intelligence and ability.
The concept behind artificial superintelligence is that AI evolves to be so close to human emotions and experiences that it does not just understand them; it develops emotions and desires of its own.
In addition to replicating human intelligence, ASI would theoretically be better at everything we do: mathematics, science, sports, art, medicine, hobbies, emotional relationships, everything. It would have greater memory and a faster ability to process and analyze data. As a result, the decision-making and problem-solving capabilities of super-intelligent machines would far exceed those of human beings.
Many hypotheses predict that artificial superintelligence will arrive sooner rather than later. According to some experts, recent advances in artificial intelligence mean ASI could emerge within the 21st century.
Nick Bostrom opens his book Superintelligence with "The Unfinished Fable of the Sparrows". In the fable, some sparrows decide to raise an owl as a pet that they could control. The plan seems great to everyone except one sparrow, who asks how they could ever control an owl; the others put that question on hold for the time being. Elon Musk worries about superintelligence in much the same way: humans are the sparrows in Bostrom's metaphor, and the owl is the future ASI. As with the sparrows, this "control problem" is troubling because we may get only one chance to solve it.
An agent that will do whatever it takes to achieve a task poses a risk. A superintelligent AI would be supremely effective at achieving its given objective, so we would have to ensure that the objective is pursued within the necessary constraints if we want to preserve some kind of control.
Such powerful machines hold a lot of potential, but they also carry a number of unknown consequences. As self-aware, super-intelligent beings, they would be capable of notions like self-preservation, and you can imagine the impact that could have on humanity, our survival, and our way of life.
Future of AI – Robots! Are they going to take the world from us?
AI's continuous growth and increasingly powerful capabilities have led many people to imagine an AI takeover.
But the line between ordinary computer programs and artificial intelligence is blurry. Replicating a few narrow facets of human intelligence and behavior is far easier than building a machine with human-like consciousness. The quest for strong AI was long considered science fiction, yet breakthroughs in machine learning and deep learning suggest we may need to be more realistic about the possibility of attaining artificial general intelligence within our lifetime.
It is difficult to imagine a future in which machines outperform the humans who build them. We cannot accurately predict every impact AI advances will have on our world, but the prospect of eliminating scourges like disease and poverty would be a gift that comes along with them.