June 24, 2021

Artificial Intelligence: 62 years from computers to the era of the Smartphone – The Province

Artificial Intelligence (AI) is one of the technological developments on everyone's lips. We currently associate it with processes that take place in our connected mobile devices, but it is also present in hospitals, where it speeds up and refines diagnoses and treatments, and in transportation, where it helps manage traffic, among other sectors.

While it may seem to be a technology of the present, scientists and computer researchers have been working on it since the fifties. It was in 1956, during the Dartmouth Conference (United States), that Artificial Intelligence received its name, coined by John McCarthy, and was established as a discipline within computer science.

A year earlier, in the proposal to hold the aforementioned event, McCarthy, along with three other scientists, had already laid the groundwork for the new discipline, which would explore the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

The proposal already spoke of an attempt to find out "how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves".

Currently, and according to one of its definitions, AI is understood as the simulation of human intelligence processes by machines, especially computer systems, as explained by Huawei, one of the companies that has focused most on this field in recent years. These processes include learning (the acquisition of information and of rules for using that information), reasoning (applying the rules to reach approximate or definitive conclusions) and self-correction.

From programming to prediction

Around the time Artificial Intelligence was born, scientists developed the modern high-level programming languages, which made it possible to write out the instructions a machine had to follow to execute a given task.

These instructions are written by hand and consist of lines of symbols and grammar rules based on human language, but that machines can understand. Fortran, Lisp and Cobol appeared in the 1950s, but today the most common languages include Python (used by Google), Java (eBay) and Swift (Apple), among others.
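As an illustration of this classical, instruction-by-instruction approach, here is a minimal sketch in Python (a hypothetical example, not taken from the article): every step the machine performs is spelled out explicitly by the programmer.

```python
# Classical programming: the programmer hand-writes every rule.
# Hypothetical example: deciding whether a temperature reading is a fever.

def is_fever(temperature_celsius):
    # The threshold and the logic are fixed, explicit instructions;
    # the machine follows them literally and never "learns" anything.
    return temperature_celsius >= 38.0

print(is_fever(37.2))  # False: below the hand-coded threshold
print(is_fever(39.1))  # True: above it
```

This is the model that dominated from the 1950s onward: the behaviour of the program is exactly, and only, what its written instructions specify.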

With programming languages, developers build programs ('software'), or web pages, that allow machines (the physical part, or 'hardware') to perform specific tasks.

Currently, Artificial Intelligence systems are based on what has been called machine learning. That is to say, a machine, which may be a computer or a mobile phone, learns what it has to do without anyone writing out the instructions.

These machines 'learn' from banks of large amounts of data and use algorithms to detect patterns and act predictively. Technologies such as voice recognition, translation services such as Google Translate or Microsoft Translator, and search recommendations are examples of this machine learning in use.
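To contrast with the hand-written instructions above, the following sketch (a hypothetical illustration, not the code behind any of the products mentioned) shows the machine-learning idea in its simplest form: a one-nearest-neighbour classifier whose behaviour comes from labelled example data rather than from explicit rules.

```python
# Machine learning in miniature: behaviour comes from data, not hand-coded rules.
# Hypothetical example: classifying fruit by weight (g) and colour score (0=green, 1=red).

# The labelled 'bank of data' the machine learns from.
examples = [
    ((150, 0.9), "apple"),
    ((170, 0.8), "apple"),
    ((120, 0.2), "lime"),
    ((110, 0.1), "lime"),
]

def classify(features):
    """Predict a label by finding the most similar known example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: distance(ex[0], features))
    return label

print(classify((160, 0.85)))  # predicted from the data: apple
```

Changing the examples changes the program's behaviour with no new instructions written, which is the essential shift the article describes, from programming to prediction. Real systems use far larger datasets and statistical models, but the principle is the same.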

How artificial intelligence works

"Human intelligence works exactly like artificial intelligenceAccording to Huawei, the AI ​​replicates the process of making unconscious decisions, a process by which a person can take around 35,000 decisions per day, although he is aware of an average of only 92, according to the scientific consensus cited by the technology company.

An Artificial Intelligence learns from the large amount of data that people generate in their daily activity, but also from the people themselves. Based on this accumulated knowledge, it is able to make decisions.

For example, a self-driving vehicle does not know what it will find on the road, but it draws on memory (data it has been given) and senses (information received in real time from multiple connected sensors), just like a human brain, to make decisions.

What differentiates Artificial Intelligence from human intelligence is that the former is "the coding and programming of a series of 'software' lines that make life easier," adds the company, which seeks to apply the advantages of this technology to the daily lives of its users.

The era of 'Smartphones'

In recent years, the integration of Artificial Intelligence into smartphones has changed the way we interact with our devices and the world around us, a change embodied by Huawei's Mate 20 family, mobile phones presented last October under the motto 'More intelligence, more power'.

The Chinese manufacturer first introduced a processor dedicated to Artificial Intelligence in the Mate 10 family. The Kirin 970 integrated a neural processing unit (NPU) that allowed it to "simulate human thinking": it was able to analyze its environment, recognize up to 2,000 images per minute, translate text in real time without a network connection, and even make recommendations and adapt to the user.

One year later, Huawei equipped its Mate 20 family with the Kirin 980 processor, with which the company advances the evolution of AI applied to mobile phones. It increases image recognition by 120 percent and is able to understand the user's habits of use and, consequently, predict the most efficient way to operate and optimize the performance of the terminal.

This terminal also offers an improved photographic experience, thanks, among other things, to the 'Multi-pass' noise reduction solution, which uses AI to eliminate noise accurately without erasing image details, improving the quality of photographs taken in low-light conditions.

Artificial Intelligence promises to revolutionize our day-to-day lives, and the 'smartphone' is key to this transition. Efficient management of the phone's resources is already improving the user experience, while machine learning will take AI to another level.
