Why AI Won’t Take Over The World Anytime Soon
Concerns that artificial intelligence systems are becoming too powerful, or might one day take control of our future, are widespread at a time when these systems play a significant role in both our everyday lives and our collective imagination. Though common in science fiction, these worries do not hold up against a closer look at the actual state of AI technology today. Here is why an AI takeover is not impending.
Understanding Narrow AI: The Engine of Modern Technology
Most AI systems we encounter on a daily basis are instances of “narrow AI.” These systems are experts in their fields: they can suggest your next Netflix film, plan your route to avoid traffic, or handle harder jobs like generating images or text. Impressive as they are, they remain constrained to the specific domain they were designed to excel in.
This holds true for generative AI tools as well, which astound us with their multimodal content creation: they can write articles, compose music, and identify objects in photos. Yet these sophisticated AIs do not actually “understand” the material they create or the environment around them; fundamentally, they are still generating mathematical predictions based on enormous datasets.
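To make “mathematical predictions based on datasets” concrete, here is a deliberately tiny sketch of the idea behind next-word prediction: a toy bigram model over an invented corpus, nothing like a production system.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "enormous datasets" real models train on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: the model "knows" only these statistics.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str):
    """Pick the most frequent follower -- a prediction, not an act of understanding."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' -- chosen purely from counts, not comprehension
```

The model only records what followed what in its training text; meaning never enters the pipeline.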
Narrow AI functions within a preset range of inputs and outputs. It cannot reason for itself, acquire knowledge beyond what it has been taught, or become sentient. So even though these systems appear intelligent, their capabilities remain strictly limited. If you are worried that your GPS will one day set off on a renegade quest to take over the planet, you can relax: your navigation system is not plotting world domination; it is simply calculating the quickest way to get where you are going, with no thought for the wider ramifications of its calculations.
The Elusive Aim of Artificial General Intelligence
The idea of artificial general intelligence, that is, a machine that can comprehend, acquire, and apply knowledge across a wide range of contexts much like a human, remains a long way off. The most advanced AIs available today struggle with nuances of speech that human children grasp easily, or with identifying objects in a cluttered room.
Making the leap from narrow AI to AGI requires fundamental advances in the way AI learns and perceives the world, not just incremental progress. Building a machine that truly understands context or demonstrates common sense remains extremely difficult, and researchers are still working to unravel the fundamentals of cognition and machine learning.

The Constraints of Data Dependencies
Another reason is the ravenous appetite of modern AI systems, which need enormous volumes of data in order to learn and perform well. This reliance on massive datasets is one of the main obstacles to AI’s advancement. Even for basic tasks, AI systems require thousands or even millions of data points, whereas humans can learn from a handful of examples or even a single encounter. This discrepancy reveals a basic difference between how humans and machines process information.
AI also requires data that is highly specific, yet such large-scale, high-quality datasets simply do not exist in many disciplines. This limits the use of AI in domains such as specialist medicine or the study of rare events, where the data needed to train a system properly may be scarce or non-existent.
It is therefore highly doubtful that AI systems will spontaneously grow more intelligent than humans.
An Orderly Development
The regulatory framework around AI is developing at the same time as the technology itself evolves and permeates more aspects of our daily lives and businesses; this dual progression is what keeps growing capabilities in check. As AI technologies advance, the need for flexible regulatory frameworks grows with them. The tech industry is becoming more adept at putting safety and ethical standards into practice. Nonetheless, to guarantee stable, secure, and well-regulated operation, these safeguards must keep pace with the rapid advances in AI.
By proactively adjusting rules, we can foresee and manage potential dangers and unintended consequences, keeping AI a formidable instrument for positive development rather than a threat. To fully realize AI’s potential and steer clear of the traps portrayed in dystopian fiction, ethical and safe AI development must remain a priority. AI exists to support and enhance human talents, not to replace them. For now, the world remains firmly in human hands.

The reasons why AI won’t take over the world (anytime soon)
“The world is changing because of AI.” “AI is taking our jobs.” Refrains like these have become increasingly popular this decade, and they bear some responsibility for the disproportionate dread of, and resistance to, AI.
The idea that AI will eventually surpass humans and take over the planet is far-fetched and unlikely to come true. “Why?” you might ask. Arai Noriko, a Japanese expert in mathematical logic and artificial intelligence, has the answer in her book AI vs. the Children Who Can’t Read Textbooks (AI vs. 教科書が読めない子どもたち).
The book dispels the myth and explains why artificial intelligence is not, and will not soon become, more intelligent than humans. It also covers the current state of AI capabilities, the industries in which jobs will be displaced, and the skills necessary to stay relevant.
Sertis has previously summarized the book’s main theme here. It takes ten minutes or less to clear up a long-standing confusion. If you would like to discuss anything related to the book, please post a comment below. Now let’s get going!
The term “artificial intelligence,” or “AI,” as it is generally used today, has two definitions.
One definition takes “artificial intelligence” literally: a machine that can replicate itself without human assistance and mimic or surpass human intellect. This has not yet happened. Even the most sophisticated AI technology is still significantly less intelligent than a person.
The second interpretation is AI as AI technology: the technologies we are currently building, such as natural language processing, image recognition, and machine learning, which should eventually enable the creation of true artificial intelligence with the capabilities described above.
In short, we mistakenly assume the term “AI” refers to artificial intelligence in the first sense, when it really refers to technology in the second sense. We then spin the fiction that AI is about to take over the world because we believe it to be more intelligent than humans, even though this is nowhere near the case.
Arai Noriko has been working to dispel this myth and show people the real limits and capabilities of artificial intelligence. She launched a project to get her artificial intelligence robot, named “Torobo,” admitted to the University of Tokyo, the top university in Japan.
After ten years of work, Torobo was able to get a 75% on the history test and rank in the top 20% of all students in mathematics.
To complete the history test, Torobo cross-referenced each candidate answer against many keywords: it searched for those keywords online, picked the page where the terms appeared most often, and then checked whether a candidate answer was supported by the material on that page. For mathematics, Torobo used machine learning and deep learning to convert test questions into equations, which a computer algebra system then solved.
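As a rough illustration of that keyword cross-referencing idea, here is a minimal sketch; the question, answer options, and “retrieved pages” are all invented, and Torobo’s real pipeline was far more elaborate.

```python
# Hypothetical multiple-choice question: which battle took place in 1600
# and secured power for Tokugawa Ieyasu?
question_keywords = ["1600", "battle", "Tokugawa"]
candidate_answers = ["Sekigahara", "Okehazama", "Nagashino"]

# Stand-ins for pages a search engine might return (made up for this sketch).
retrieved_pages = [
    "The Battle of Sekigahara in 1600 secured power for Tokugawa Ieyasu.",
    "Okehazama (1560) was a surprise attack led by Oda Nobunaga.",
    "Nagashino (1575) is remembered for the massed use of firearms.",
]

def score(answer: str) -> int:
    # Count keyword hits on pages that also mention the candidate answer.
    return sum(
        sum(page.lower().count(k.lower()) for k in question_keywords)
        for page in retrieved_pages
        if answer.lower() in page.lower()
    )

best = max(candidate_answers, key=score)
print(best)  # "Sekigahara" -- chosen by co-occurrence counts, not understanding
```

The selection is driven entirely by term co-occurrence; no step in it involves knowing what a battle is.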
In English and Japanese, however, Torobo has not advanced much in those ten years; in these two subjects it only scores below 50%. The language tests include sections on grammar, vocabulary, conversation, and reading comprehension. Torobo can be trained with data and rules to handle the grammar and vocabulary sections, but the reading comprehension and conversation sections are beyond its capabilities, because they require common-sense knowledge and understanding.
Having learned about AI’s limits, you might wonder how assistants like Google and Siri manage to respond to our queries if AI cannot understand humans or exercise common sense.
The answer is that they pick out keywords from our query and analyse the data to determine the best result. If you ask Siri to find “Italian restaurants with good reviews,” a ton of suggestions will appear. Next, try searching for “Italian restaurants with bad reviews.” You will probably receive results very similar to the first search. This is because the machine reasons only from keywords and statistics: the second query misfires because hardly anyone writes “bad reviews” in the pages it searches.
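A toy version of that failure, assuming a ranker that only counts keyword overlap; the restaurants and their blurbs are invented for the sketch.

```python
# A ranker that only counts shared words, with no model of sentiment or negation.
restaurants = {
    "Trattoria Roma": "italian restaurant with good reviews and great pasta",
    "Casa Milano": "popular italian restaurant with good reviews",
    "Pizza Presto": "cheap italian takeaway with mixed reviews",
}

def rank(query: str):
    terms = set(query.lower().split())
    return sorted(
        restaurants,
        key=lambda name: len(terms & set(restaurants[name].split())),
        reverse=True,
    )

print(rank("italian restaurants with good reviews"))
print(rank("italian restaurants with bad reviews"))
# Both queries return the same order: "bad" matches nothing, so the shared
# words ("italian", "with", "reviews") dominate the score.
```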

What about translating text with Google Translate? It learns to choose suitable phrases using a wide range of statistical data and language models. The computer makes a valiant effort to mimic human speech, but it is not aware of what it is saying. Users of Google Translate can suggest better translations, and the algorithm learns from them. What about AI that composes music or writes articles? We are the ones who train it, feeding it a vast collection of articles and songs; it then decides, partly at random, what to produce by copying and recombining our originals.
The same goes for image recognition, deep learning, and machine learning. All of these technologies derive their expertise from our training data: they commit what they see or hear to memory. Ask them to identify something they have never seen? The answer is a big no.
By now you may have realized that AI is limited to calculation and memorization. Three “languages” provide every capacity modern AI has: probability (the likelihood of throwing a 1 with one die is 1/6), statistics (prediction based on historical data), and logic (if A=B and B=C, then A=C). AI cannot do anything that cannot be computed. The simple expression “I love you” is incomprehensible to AI, since love is not a notion that can be represented in numbers or images.
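All three of those “languages” are things a few lines of code can express directly, which is exactly the point; a trivial sketch:

```python
from fractions import Fraction
from statistics import mean

# Probability: the chance of rolling a 1 with a fair six-sided die.
p_one = Fraction(1, 6)

# Statistics: predict the next value as the average of past observations.
history = [12, 15, 11, 14, 13]
prediction = mean(history)

# Logic: transitivity -- if a equals b and b equals c, then a equals c.
a, b, c = 5, 5, 5
assert a == b and b == c and a == c

print(p_one, prediction)  # 1/6 13
```

Whatever AI does must ultimately reduce to operations like these; what cannot be computed cannot be done.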
The human brain is an intricate organ, and we use comprehension and common-sense reasoning every minute of our daily lives. To be as intelligent as humans, AI would have to mimic our brains, and to mimic our brains it would have to compute common-sense comprehension and reasoning. That is impossible for now, because there is no precise formula or equation that describes how we think. Advances in mathematics and AI have not been able to get past the common-sense barrier. This is why AI will not take over the world, at least not anytime soon.
Now for the unwelcome news. While humans are still superior to AI in comprehension and common-sense reasoning, AI completely outperforms humans in computation. With its computational prowess, Torobo could gain admission to 70% of Japanese universities, including prestigious private ones.
In other words, eighty percent of students score at or below Torobo’s level, and only twenty percent score higher. According to research, jobs built on computation and memorization, jobs that rely primarily on statistics, and routine tasks such as credit analysis and data entry may be handed to AI to cut costs and errors.
A crucial question, then, is how we will live if AI takes over our work. The answer is not complicated: go back and sharpen the fundamental abilities that AI lacks, such as reading comprehension, which we have long neglected. What sets us apart is our capacity for common-sense understanding and reasoning, together with our agility and inventiveness. Unlike AI, which is limited to the tasks humans teach it to perform and is incapable of comprehension or self-invention, we possess these attributes.
Even so, AI may eventually replace humans in more than 50% of jobs, so we need a plan to thrive. Start by practicing comprehension, to make sure we can grasp the issues and problems society faces. Create new roles that innovate to address those issues. Create new goods and services to meet people’s needs. Convey compassion and understanding through your work. Finally, remember to stay agile.
6 AI Limitations, and Why It Won’t Quite Take Over in 2023
Artificial intelligence (AI) is a fast-developing field with the potential to transform many aspects of our lives. But for all its benefits, the technology has limits worth keeping in mind. Among them are a lack of common sense, transparency, creativity, and emotion, as well as safety and ethical concerns.
These drawbacks can impair AI systems’ effectiveness and performance and restrict their applications, particularly in domains like banking, healthcare, transportation, and decision-making. To fully realize AI’s potential, it is crucial to recognize and overcome these constraints.
Poor contextual awareness
AI systems are not adept at grasping the subtleties and context of human language and communication. Machines can recognize patterns and make predictions based on the vast volumes of text data they have been trained on, but they cannot grasp the finer points and complexities of human communication and language.
For instance, they may find it difficult to comprehend irony, figurative language, or sarcasm. They also struggle with the context in which words are used, which can result in mistakes or unexpected behaviour. Idiomatic language and cultural allusions are likewise difficult for AI to handle, although with more training and exposure, systems do improve at this kind of language.
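Sarcasm shows why surface patterns are not enough. A toy word-counting “sentiment” scorer, invented here purely for illustration with a hand-picked word list, reads an obviously sarcastic complaint as positive:

```python
# A naive scorer with no notion of sarcasm or context, only word lists.
POSITIVE = {"great", "love", "wonderful"}
NEGATIVE = {"delay", "broken", "terrible"}

def sentiment(text: str) -> int:
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("Oh great, another two-hour delay. Just wonderful."))
# Scores 2 - 1 = +1 ("positive"), though a human immediately hears sarcasm.
```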
An absence of common sense
As things stand, AI systems cannot apply common-sense reasoning to novel circumstances. Because they can only predict outcomes and make choices based on the data they have been trained on, they are unable to adapt their knowledge to new settings. This lack of common sense can lead AI systems to make mistakes, especially in unfamiliar contexts.
An AI system trained to recognize items in photos, for instance, may be unable to identify an object it has never seen before. In that case, human input is still needed to supply examples of the new object and retrain the system.
Furthermore, because it is limited to what it was trained for and cannot grasp the nuances of a task or concept, it may perform poorly when presented with a similar but slightly different assignment. This lack of common sense restricts AI’s ability to solve problems, make decisions, and make sense of the outside world.
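A minimal illustration of the “never seen before” problem, using a toy nearest-prototype classifier rather than a real vision model: whatever it is shown, it can only ever answer with one of the labels it was trained on.

```python
import math

# Toy "image" features (made-up 2-D vectors) for the classes seen in training.
prototypes = {
    "cat": (0.9, 0.1),
    "dog": (0.8, 0.3),
    "car": (0.1, 0.9),
}

def classify(features):
    """Return the closest trained label -- there is no 'I don't know' option."""
    return min(prototypes, key=lambda label: math.dist(features, prototypes[label]))

# Features of an object that was never in the training set (say, a giraffe):
print(classify((0.5, 0.5)))  # 'dog' -- confidently wrong on the unseen object
```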
Bias in the training data
AI systems can reinforce and magnify biases that already exist in the data they are trained on. Bias can enter data through a number of factors, including human error, sampling bias, and social and historical context. An AI system trained on a dataset of predominantly male job candidates, for instance, is likely to be biased in favor of men and to produce less accurate predictions for women. Analogously, an AI system trained on a dataset of criminal defendants that is predominantly black is likely to carry bias relating to the black community and to produce less accurate predictions for white defendants.
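As a crude sketch of how skew in the data becomes skew in the model (the hiring records below are entirely invented), a model that simply learns historical frequencies reproduces them, and with so few records for the smaller group its estimates there are also far less reliable:

```python
from collections import Counter

# Invented historical hiring records, heavily skewed toward male applicants.
records = (
    [("male", "hired")] * 70 + [("male", "rejected")] * 30
    + [("female", "hired")] * 3 + [("female", "rejected")] * 7
)

def predicted_hire_rate(group: str) -> float:
    outcomes = Counter(outcome for g, outcome in records if g == group)
    return outcomes["hired"] / sum(outcomes.values())

print(predicted_hire_rate("male"))    # 0.7 -- learned from 100 examples
print(predicted_hire_rate("female"))  # 0.3 -- learned from only 10 examples
```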
Insufficient originality
AI is still not creative, even with the recent controversy around its creative potential. Algorithms and mathematical models, the foundation of artificial intelligence (AI) systems, enable them to identify patterns and forecast outcomes from vast volumes of data. These systems, however, are unable to generate original notions or ideas.
AI cannot produce anything wholly original in the way a human artist produces a new artwork or a human scientist a new theory. That said, one might counter that since we are all influenced by everything around us and everything that came before us, nothing anyone makes is entirely original either.
Absence of Feeling
Artificial intelligence (AI) systems are not emotional beings. This is where ChatGPT fell short.
What distinctions exist between the feelings of envy and jealousy in humans?
Although envy and jealousy are related emotions, they differ greatly in a few important ways.
Jealousy is the emotion of resentment or anger at another person’s accomplishments or possessions. It is frequently aimed at someone whom the jealous person believes has what they want or need. A romantic partner, friend, or coworker who is succeeding in life or in their career might trigger jealousy.
Envy, on the other hand, is the desire for something another person possesses. It is frequently aimed at someone whom the envious person believes has what they wish they had. Someone’s achievements, position, or belongings might make you feel envious. In conclusion, envy arises when someone wants something they do not have, while jealousy arises when someone feels they are losing what they already have.
This is a clearer explanation from dictionary.com, probably authored by a human: both envy and jealousy involve longing for what someone else possesses, but jealousy is typically seen as worse because it often comes with animosity toward the other person. Envy is also a negative emotion, but it usually implies no animosity; it is a mixed feeling of admiration and discontent. Another distinction is that envy can be a verb as well as a noun.
AI systems are limited to processing data in a methodical and ordered way. They do not feel emotions themselves, but they can identify patterns in data, such as tone of voice or facial expressions, that may suggest a particular mood. Put differently, artificial intelligence lacks awareness and emotional intelligence: it cannot subjectively experience happiness, sadness, or anger.
Some academics are focused on creating artificial intelligence (AI) systems that can mimic emotions, such as those seen in chatbots or virtual assistants, to enhance their capacity to communicate with people. To be clear, this simulation is only a means of enhancing the naturalness or human-likeness of the encounter; it is not an exact replica of the genuine feeling.
Here’s an example of ChatGPT writing an irate person’s description of an M&S quick meal:
M&S Bourguignon (Slow Cooked Beef)? More like M&S Slow Cooked Letdown! Tender? More like tough and chewy. British meat? More like: who knows where from?