Evolution of Artificial Intelligence by Self-taught Technology


Artificial intelligence keeps getting smarter as we move into the future. Waze is the world’s largest community-based traffic and navigation app, and the interesting thing about it is its artificial intelligence: it adapts to your route patterns and builds the best possible route so that you reach your destination as quickly as possible. Google Now and Apple Intelligence are predictive AIs; they learn a user’s plans and habits from email, searches, browsing history, calendar entries and so on, and then anticipate what the user wants before he or she even thinks to ask the app.
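
As a rough illustration of what “adapting to your route patterns” can mean, here is a minimal sketch, not Waze’s actual algorithm, of a hypothetical frequency-based predictor that guesses a likely destination from past trips at a given day and hour:

```python
from collections import Counter, defaultdict


class RoutePatternPredictor:
    """Hypothetical sketch: predict a likely destination from past trips."""

    def __init__(self):
        # (weekday, hour) -> counts of destinations seen in that time slot
        self.history = defaultdict(Counter)

    def record_trip(self, weekday: str, hour: int, destination: str) -> None:
        """Log one observed trip so future predictions can use it."""
        self.history[(weekday, hour)][destination] += 1

    def predict(self, weekday: str, hour: int):
        """Return the most frequent destination for this time slot, if any."""
        counts = self.history[(weekday, hour)]
        if not counts:
            return None
        return counts.most_common(1)[0][0]


# Example: after a few weekday-morning commutes, the app starts guessing "office".
predictor = RoutePatternPredictor()
for day in ["Mon", "Tue", "Wed"]:
    predictor.record_trip(day, 8, "office")
predictor.record_trip("Sat", 8, "gym")

print(predictor.predict("Mon", 8))  # -> "office"
```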

These apps work much like the brains of human babies, which learn and adapt by watching and experiencing. A recent, and increasingly influential, line of thinking in cognitive science called predictive processing, or more simply the predictive brain, holds that rather than being a passive receptacle waiting to process whatever raw data comes in from the outside world, the brain is an active, Bayesian machine: it generates inferences about that world and shapes how we perceive it.

Andy Clark, a professor of philosophy at the University of Edinburgh and author of Surfing Uncertainty: Prediction, Action and the Embodied Mind, uses the example of returning to his office and seeing a steaming cup of coffee he had left on the table. Many of us might imagine that seeing it is a bottom-up affair: the cup surges into view, like “an array of activated pixels”, and as the representation takes shape we match it against stored mental models of what an object such as a cup of coffee looks like. Clark suggests another possibility. Upon re-entering his office, he writes, “My brain already commands a complex set of coffee-and-office involving expectations.”
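
The “Bayesian machine” idea can be made concrete with a toy calculation (the numbers are invented purely for illustration): a strong prior expectation of coffee in the office means that even noisy sensory evidence is enough to settle the percept.

```python
# Toy Bayesian update for "is there a cup of coffee on my desk?"
# All numbers are made up purely to illustrate the prior-times-likelihood idea.

prior_coffee = 0.9          # I usually leave coffee on my desk (strong expectation)
prior_no_coffee = 1 - prior_coffee

# Likelihood of a blurry brown blob in the corner of my eye under each hypothesis
p_blob_given_coffee = 0.6
p_blob_given_no_coffee = 0.1

# Bayes' rule: posterior is proportional to prior times likelihood
evidence = (prior_coffee * p_blob_given_coffee
            + prior_no_coffee * p_blob_given_no_coffee)
posterior_coffee = prior_coffee * p_blob_given_coffee / evidence

print(round(posterior_coffee, 3))  # ~0.982: weak evidence + strong prior -> confident percept
```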

This already complete top-down model sends a stream of predictions to meet the incoming sensory data, activating clusters of neurons to do the work of looking for that coffee. When errors are encountered, “incorrect downward-flowing ‘guesses’ yield error signals that propagate laterally and upwards and are used to leverage better guesses”. To perceive the world, Clark writes, “is to successfully predict our own sensory state”. Clark also notes that the reason we cannot tickle ourselves is akin to the problem of trying to tell yourself a joke.
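
A heavily simplified sketch of that prediction-error loop (not Clark’s own model, just the general scheme): the “brain” keeps a guess, compares it with the incoming signal, and uses the error to nudge the guess until the error fades away.

```python
# Minimal sketch of prediction-error minimisation: a top-down guess is repeatedly
# corrected by the error between the prediction and the incoming sensory signal.

sensory_input = 5.0      # what the world actually delivers (e.g. brightness of the cup)
prediction = 0.0         # the brain's initial top-down guess
learning_rate = 0.3      # how strongly error signals revise the guess

for step in range(20):
    error = sensory_input - prediction      # "error signals propagate upwards"
    prediction += learning_rate * error     # ...and are used to leverage better guesses
    print(f"step {step:2d}: prediction={prediction:.3f}, error={error:.3f}")

# The error shrinks towards zero: to perceive is to successfully predict the input.
```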

However ticklish you may be, and however funny the joke, you have already deployed what Clark calls “a precise model of the mapping from our own motor commands to sensory (bodily) feedback.” Because your body knows what to expect, the relevant neural responses are suppressed and there is no room for surprise, which is key both to being tickled and to laughing at a joke. We can even have a brain response to a missing auditory stimulus that equals the response to a stimulus that is actually present, simply because we predicted it would be there (much as when we feel our phone vibrate even though it hasn’t).
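
One way to picture that “mapping from our own motor commands to sensory feedback” is a forward model that predicts the sensation your own movement will cause, so that only the unpredicted remainder registers as surprise. The sketch below is a hypothetical illustration of that idea, not a neuroscience model:

```python
def forward_model(motor_command: float) -> float:
    """Hypothetical forward model: predict the sensation our own movement will cause.
    (Assumed here to be a simple proportional mapping, purely for illustration.)"""
    return 0.95 * motor_command


def perceived_intensity(actual_sensation: float, predicted_sensation: float) -> float:
    """Only the unpredicted part of the sensation survives attenuation."""
    return max(0.0, actual_sensation - predicted_sensation)


# Tickling yourself: the motor command predicts the feedback, so almost nothing registers.
self_tickle = perceived_intensity(actual_sensation=1.0,
                                  predicted_sensation=forward_model(1.0))

# Being tickled by someone else: no motor command, no prediction, full surprise.
other_tickle = perceived_intensity(actual_sensation=1.0,
                                   predicted_sensation=forward_model(0.0))

print(round(self_tickle, 2), other_tickle)  # 0.05 vs 1.0
```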

A predictive app works in a similar way. If you have a flight to catch at 10 am next week, the app does not trawl through every date in your calendar; it looks at the appointment time and then starts acting. It builds a model of that day’s traffic in its memory (we can call it the brain, or the AI) and then warns the user about congestion or bad weather. Most robots in warehouses and factories are already capable of predictive processing of their own: when they retrieve objects they are trained to filter out extraneous information such as shelves, overhead lights and other robots. Their neurons are tuned towards what they expect to see. In the near future we may see them learn about their surroundings and cooperate with them much as humans do.
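
Here is a hypothetical sketch of that flight-reminder flow. The traffic and weather lookups are stand-in functions invented for this example, not any real app’s API; the point is that the app works backwards from the single appointment rather than scanning the whole calendar.

```python
from datetime import datetime, timedelta


def estimated_travel_minutes(departure: datetime) -> int:
    """Stand-in traffic model: assume rush hour roughly doubles a 30-minute drive."""
    return 60 if 7 <= departure.hour <= 10 else 30


def is_bad_weather(date: datetime) -> bool:
    """Stand-in weather check; a real app would call a forecast service."""
    return False


def plan_departure(flight_time: datetime, airport_buffer_minutes: int = 120) -> str:
    """Work backwards from the appointment instead of scanning every calendar date."""
    leave_at = flight_time - timedelta(minutes=airport_buffer_minutes)
    leave_at -= timedelta(minutes=estimated_travel_minutes(leave_at))
    warning = " Bad weather expected; leave earlier." if is_bad_weather(flight_time) else ""
    return f"Leave home by {leave_at:%H:%M} for your {flight_time:%H:%M} flight.{warning}"


print(plan_departure(datetime(2025, 6, 12, 10, 0)))
# -> "Leave home by 07:00 for your 10:00 flight."
```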
