Tag Archives: artificial intelligence

‘Anti-Turing test’ puts Facebook M ‘AI’ in doubt

A software engineer has detailed his quest to develop an ‘anti-Turing test’ to prove that Facebook’s personal assistant service ‘M’ is powered exclusively by humans, rather than AI as the company claims.

Facebook has touted M as a comprehensive personal assistant to rival Apple’s Siri and Microsoft’s Cortana — with the caveat that M is assisted by humans, not powered by AI alone. But now Arik Sosman, a software engineer at BitGo, has taken to Medium to claim that M’s artificially intelligent element is smaller than advertised.

“M’s capabilities far exceed those of any competing AI,” Sosman writes. “Where some AIs would be hard-pressed to tell you the weather conditions for more than one location, M will tell you the forecast for every point on your route, and also provide you with convenient gas station suggestions, account for traffic, and provide you with options for food and entertainment at your destination.”

“When communicating with M, it insists it’s an AI,” he continues. “And that it lives right inside Messenger. But it’s non-instantaneous, and the sheer unlimited complexity of tasks it can handle suggest otherwise.”

Sosman used a number of examples to illustrate his thesis — which he called “much harder” than a traditional Turing test as “it’s much easier for humans to pretend to be an AI than for an AI to pretend to be human”.

Sosman asked M to perform a set of complicated tasks that “no other AI could pull off” — asking it for directions and then altering his request slightly. M was able to fulfil the request, but not without a few spelling mistakes and other errors that Sosman believes prove M is powered by humans. And when M placed a call to a landline, the call was made by a distinctly human-sounding woman.

“Thus, here we are. We have definitive proof that M is powered by humans,” wrote Sosman. “The next question is: Is it only humans, or is there at least some AI-driven component behind it?”

Facebook insists that M is powered by AI, but that it is trained and supervised by human beings — thus explaining the human errors experienced by Sosman.

The personal assistant service is part of an ambitious long-term project within the Messenger team to build a complex AI system that can complete tasks without human supervision. It is expected that M will become more and more automated over time.


Google releases its artificial intelligence software into the wild

Google is open-sourcing its machine learning system, TensorFlow, in the hope that it will accelerate research into artificial intelligence

Google has announced that it is releasing its artificial intelligence software into the wild, allowing third-party developers to contribute to its evolution.

Artificial intelligence – or what Google describes as “machine learning” – is making computers and gadgets smarter every day.

From image recognition to voice translation and noise cancellation, Google uses machine learning in many of its products, and has pumped a huge amount of its research and development budget into improving these systems.

Earlier this year, for example, Google engineers released the bizarre results of an artificial intelligence experiment, which saw photos interpreted and edited by the company’s “neural network”, which has been trained to detect faces and other patterns in images.

Image caption: One of the images thrown up by Google’s neural network (Photo: Google)

The latest iteration of its machine learning system is known as TensorFlow, which Google claims is faster, smarter and more flexible than its predecessor, DistBelief, which Google used to demonstrate that concepts like “cat” could be learned from unlabeled YouTube images.

“We use TensorFlow for everything from speech recognition in the Google app, to Smart Reply in Inbox, to search in Google Photos,” said Sundar Pichai, chief executive of Google, in a blog post. “It’s a highly scalable machine learning system – it can run on a single smartphone or across thousands of computers in data centres.”
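For readers curious what that code looks like in practice, here is a minimal, illustrative sketch of the kind of building block TensorFlow provides: a single neural-network layer applied to a batch of made-up data. It uses the library’s modern Python interface rather than the graph-based API of the original 2015 release, and none of the numbers refer to any real Google model.

```python
import tensorflow as tf

# One neural-network "layer": multiply a batch of input vectors by a
# weight matrix, add a bias and apply a non-linearity. TensorFlow
# composes blocks like this into full machine learning models, and the
# same code can run on a laptop, a single GPU or a cluster of machines.
x = tf.random.normal([8, 20])               # a batch of 8 example vectors
w = tf.Variable(tf.random.normal([20, 5]))  # learnable weights
b = tf.Variable(tf.zeros([5]))              # learnable biases

y = tf.nn.relu(tf.matmul(x, w) + b)
print(y.shape)                              # (8, 5): 5 outputs per example
```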

However, even with all the progress Google has made with machine learning, it admits that it could still work much better.

Computers today still can’t do what a four-year-old can do effortlessly, like knowing the name of a dinosaur after seeing only a couple of examples, or understanding that “I saw the Grand Canyon flying to Chicago” doesn’t mean the canyon is hurtling over the city.

This is why the company is “open-sourcing” the system, allowing third-party developers to access the raw computer code, adapt it, and start using it in their own applications.

“We’ve seen firsthand what TensorFlow can do, and we think it could make an even bigger impact outside Google. So today we’re also open-sourcing TensorFlow,” said Mr Pichai.

“We hope this will let the machine learning community – everyone from academic researchers, to engineers, to hobbyists – exchange ideas much more quickly, through working code rather than just research papers. And that, in turn, will accelerate research on machine learning, in the end making technology work better for everyone.”

He added that TensorFlow may be useful wherever researchers are trying to make sense of very complex data, from protein folding to crunching astronomy data.

The news comes as new research released by online marketing technology company Rocket Fuel reveals that almost twice as many people believe artificial intelligence can solve big world problems compared to those who think it is a threat to humanity.

Stephen Hawking has famously been quoted as saying that the rise of artificial intelligence could see the human race become extinct, warning that technology will eventually “supersede” humanity, as it develops faster than biological evolution.

However, the research reveals that only 21 per cent of Britons see artificial intelligence as a threat or are scared by it, while 42 per cent are excited or think it can solve big world problems.

Meanwhile, despite reports that thousands of British jobs have already been replaced by machines, only 9 per cent of people believe that artificial intelligence will threaten their job, while 10 per cent think it will enhance it.


Google and Facebook are battling to create the ultimate virtual assistant

The war of the AI virtual assistants has been heating up of late. It’s no secret that Google, Facebook, and Microsoft have all been working feverishly towards the next big thing in artificial intelligence – a virtual assistant that can wave her binary wand and handle all the tedious minutiae that have come to characterize living in the digital era.

From flight schedules to email pile-ups, the human experience is increasingly subject to data overload. Our agrarian ancestors typically communicated with just a handful of people in a given day, and it was under these largely tribal circumstances that our brains developed their social circuitry. As the raft of information we are responsible for increases exponentially, it’s no exaggeration to say we humans are quickly running up against the limits of our information-processing capacity. Enter the AI virtual assistant.

Two press releases this week are indicative of the different ways two of the front runners in the field, Google and Facebook, are attempting to create the virtual assistant of the future. With relatively little fanfare, Google took the lid off a new feature of its Inbox email app, called Smart Reply. The aim of Smart Reply is to generate intelligent automated responses to common emails we receive.

Ever since the “Hummingbird” update to Google’s search algorithm back in 2013 – an update which essentially allows Google search to understand the meaning behind a user’s query rather than treating it as a blind string of words – Google has been pushing artificial intelligence to higher and higher levels of semantic understanding. No doubt much of this progress owes to the hiring of Ray Kurzweil, one of the early wizards of artificial intelligence and high priest of the singularity movement, whose pioneering work in AI centered on speech recognition.

Google’s approach to artificial intelligence, and hence its virtual assistant, has largely focused on understanding human speech and its meaning. This is only natural, since it is an area directly related to Google’s core business: internet search. Somewhere along the way Google must have realized that every email we receive can be treated as a search query. We typically write to someone because we seek some form of information from them; in essence we are querying their personal storehouse of information. As Google increasingly has access to that personal storehouse of information, it becomes capable of responding to emails just as it would to a search query.
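As a toy illustration of that idea, the sketch below ranks a few canned replies against an incoming message by simple word overlap. It is a deliberately crude stand-in, not Google’s production Smart Reply system, and every string in it is made up.

```python
# Toy "email as query" demo: score each canned prompt by word overlap
# with the incoming message and return its associated reply.
canned_replies = {
    "are you free for lunch": "Sure - what time works for you?",
    "can you send the report": "Attached - let me know if anything is missing.",
    "thanks for your help": "Happy to help!",
}

def suggest_reply(email_text: str) -> str:
    words = set(email_text.lower().split())
    best_prompt = max(canned_replies,
                      key=lambda p: len(words & set(p.split())))
    return canned_replies[best_prompt]

print(suggest_reply("Hey, are you free for lunch on Friday?"))
# -> "Sure - what time works for you?"
```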

While automated responses to emails may ease some of the mental anxiety we are exposed to on a daily basis, they raise some disturbing philosophical questions. At what point have we ceded too much of our decision making to a machine intelligence, and what happens when automated replies start responding to messages that were themselves generated by automated replies? Will entire conversations take place between computers with a minimum of human input?

Image caption: Facebook software has learned to accurately predict whether precariously balanced virtual blocks will fall down.

Facebook, on the other hand, has taken a different tack in developing its virtual assistant. While it is no less interested in natural language processing, a much larger part of its business model depends on imagery and visual data, which users upload in ever-increasing droves. It’s no surprise, then, that Facebook is devoting massive resources to advances in the field of artificial intelligence related to image processing.

Two developments this week point to how Facebook’s virtual assistant might benefit from enhanced visual cognition. One recent publication relates to the ability of their AI to make physics-based inferences about a visual scene, for instance understanding when a stack of blocks pictured in a photograph is likely to fall over. The other advance relates to scene recognition and developing algorithms that can describe what is taking place in a photo.

As I wrote in a prior ExtremeTech post, scene recognition will likely be essential to enabling robots to participate in more meaningful ways as nurses, caregivers, and domestic assistants. Understanding the visual world has proven every bit as challenging for artificial intelligence as understanding human speech. But as both Google and Facebook make major advances in these arenas, it shouldn’t be long before truly startling and uncanny AI virtual assistants begin appearing in our day-to-day lives.


Facebook is building artificial intelligence to finally beat humans at Go

Facebook is now tackling a problem that has evaded computer scientists for decades: how to build software that can beat humans at Go, the 2,500-year-old strategy board game, according to a report today from Wired. Because of Go’s structure — you place black or white stones at the intersection of lines on a 19-by-19 grid — the game has more possible permutations than chess, despite its simple ruleset. The number of possible arrangements makes it difficult to design systems that can look far enough into the future to adequately assess a good play in the way humans can.
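A rough back-of-envelope calculation gives a sense of that scale. Each of the 361 intersections can be empty, black or white, so an upper bound on board configurations – ignoring the game’s legality rules – is 3 to the power of 361, a number with 173 digits, while chess state-space estimates are usually quoted at somewhere around 10^43 to 10^47. The short sketch below (illustrative only, not Facebook’s code) makes the comparison concrete.

```python
# Upper bound on Go board configurations: each of the 361 points on a
# 19x19 grid can be empty, black or white (legality rules ignored).
go_upper_bound = 3 ** (19 * 19)
print(len(str(go_upper_bound)))   # 173 digits, i.e. roughly 10^172

# By comparison, chess state-space estimates in the literature are
# usually quoted in the region of 10^43 to 10^47.
```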

“We’re pretty sure the best [human] players end up looking at visual patterns, looking at the visuals of the board to help them understand what are good and bad configurations in an intuitive way,” Facebook chief technology officer Mike Schroepfer said. “So, we’ve taken some of the basics of game-playing AI and attached a visual system to it, so that we’re using the patterns on the board — a visual recognition system — to tune the possible moves the system can make.”

SOFTWARE THAT CAN PLAY GO BY MIMICKING THE HUMAN BRAIN

The project is part of Facebook’s broader efforts in so-called deep learning. That subfield of artificial intelligence is founded on the idea that replicating the way the human brain works can unlock statistical and probabilistic capabilities far beyond the capacity of modern-day computers. Facebook wants to advance its deep learning techniques for wide-ranging uses within its social network. For instance, Facebook is building a version of its website for the visually impaired that will use natural language processing to take audio input from users — “what object is the person in the photo holding?” — analyze it, and respond with relevant information. Facebook’s virtual assistant, M, will also come to rely on this type of technology to analyze and learn from users’ requests and respond in a way only humans could.


‘Human’ robot Pepper proves popular again and sells out in less than a minute in Japan

Pepper, one of the world’s first personal robots to understand human emotion, has flown off shelves in Japan

Japan’s first life-size personal robot with the ability to understand feelings has again sold out within a minute of going on sale.

‘Pepper’, which can live autonomously in a person’s home, costs 198,000 Japanese yen (£1,070) and has the appearance of Casper the Friendly Ghost in 3D.

The latest batch of 1,000 Pepper robots sold out after they were put up for sale online at 10am on Saturday, reported Taiwanese newspaper The China Post on Sunday.

The child-size automaton was developed by Japanese telecom giant SoftBank and Taiwanese contract manufacturer, Hon Hai Precision Industry.

Image caption: Pepper, an emotional robot, greets conference attendees during the Wall Street Journal Digital Live (WSJDLive) conference at the Montage hotel in Laguna Beach, California, October 20, 2015

The makers called Pepper a “social companion for humans”, claiming that the androgynous house-bot is the first of its kind to respond to human “emotional signifiers”, such as laughing or frowning.

The first batch of Peppers hit the market in June. Since then, four batches have gone on sale. The latest model boasts new refinements, including the ability to memorise and store data on human responses by using cloud technology-based artificial intelligence.

The companies’ production line is based at Hon Hai’s factory, reported The China Post.

Videos of Pepper have circulated on YouTube, in which the nearly 4ft robot responds to questions and commands in Japanese in a high-pitched, squeaky voice.

On the basis of these videos, technology experts noted that Pepper is not the best listener but has a distinctly cheeky side.

One of the robot’s favourite questions is: “So you are very chic. Are you a model?”.

In one conversation, when asked how old it was, it replied: “In human years, I don’t know how old I am, but as a robot I was made in 2014.”

Japan is leading the world market for human-like personal robots. Previous models on sale include SoftBank’s NAO programmable robot, smaller than Pepper at 29in high; Sony’s AIBO robotic dog; and Honda’s ASIMO robot that can run and climb stairs.

Image caption: People take pictures with humanoid robot ‘Pepper’, jointly developed by Japan’s mobile carrier SoftBank and French humanoid robot maker Aldebaran, at a SoftBank showroom in Tokyo, on June 6, 2014


Apple Hires An Artificial Intelligence Expert From Nvidia. Is He Going to Work On Self-Driving Cars?

In what may be another sign that Apple is getting serious about cars, the company has hired Jonathan Cohen, the director of deep learning for chip-maker Nvidia. Nvidia is best-known for its graphics products used for computer games, but it has recently been pushing into the world of autonomous vehicles.

Cohen reported the change on his LinkedIn page.

Deep learning, as Re/code has documented, is a branch of artificial intelligence prized at tech companies for its ability to train computers to process patterns in large reams of visual data. Lately, Nvidia has applied the technique to cars. The company sells its chips — graphics processing units, or GPUs — to carmakers, which use them to power the cameras and radar that enable vehicles to drive autonomously.

Apple is building up a sizable internal operation around cars. Last week, CEO Tim Cook said that a “massive change” is coming to the industry.

“This is a big hire for Apple,” said Chris Nicholson, co-founder of deep learning startup Skymind.io. “Nvidia’s GPUs are being used to power auto-pilot systems in cars, so the implications are pretty clear.”

We reached out to Apple and Nvidia for comment, and will update if we hear back.

Nvidia primarily provides computing hardware for the gaming industry, but it has ramped up its automotive division. In its most recent fiscal year, the company said its automotive unit reached $183 million in revenue. Cohen has run its deep learning group since 2008.

Apple currently uses deep learning tech for things like its maps and voice recognition with Siri. Cohen’s new LinkedIn bio doesn’t spell out what he’s up to. “I build software,” it reads.

Here he is at this year’s Consumer Electronics Show discussing how Nvidia’s tech is used in self-driving cars.


Cancer drug development time halved thanks to artificial intelligence

Artificial intelligence has halved the time taken to bring a cancer-combatting drug to market, a start-up claims

A cancer-fighting drug is on target to be brought to market in half the expected time thanks to the use of artificial intelligence in testing, a start-up has claimed.

Berg Health, a pharmaceutical start-up founded in 2008 with Silicon Valley venture capital backing, said it expected the drug to go on sale within three years, marking seven years in development compared with the typical 14.

Healthy cells feed on glucose in the body and die off, in a process known as cell death, when their usefulness draws to a close. But in some circumstances the mitochondria – the parts of the cell that provide its energy – malfunction and metabolise lactic acid instead of glucose, turning off their built-in cell death function at the same time.

The cell can then become cancerous and a tumour grows. Berg’s drug, BPM31510, will reactivate the mitochondria, restarting the metabolising of glucose as normal and reinstituting cell death, so the problem cells can be harmlessly passed out of the body.

Berg Health’s team used a specialised form of artificial intelligence to compare samples taken from patients with the most aggressive strains of cancer, including pancreatic, bladder and brain, with those from non-cancerous individuals. The technology highlighted disparities between the corresponding biological profiles, selecting those it predicted would respond best to the drug.

“We’re looking at 14 trillion data points in a single tissue sample. We can’t humanly process that,” said Niven Narain, a clinical oncologist and Berg co-founder. “Because we’re taking this data-driven approach we need a supercomputer capability.


“We use them for mathematics in a big data analytic platform, so it can collate that data into various categories: healthy population for women, for men, disease candidates etc, and it’s able to take these slices in time and integrate them so that we’re able to see where it’s gone wrong and develop drugs based on that information,” Mr Narain said.

Berg expects to begin phase two trials of the drug next January, meaning it has already been shown to be effective in animal or cell-culture tests and is safe enough to continue investigating in humans.

Mr Narain said it usually takes $2.6bn (£1.7bn) and 12 to 14 years to get a drug to market, and that the trial metrics after four and a half years’ worth of development indicated the time it takes to create a drug can be cut by at least 50 per cent. This will also translate into less expenditure, he claimed.

“I don’t think we’re going to spend $1.3bn to produce our first drug, so the cost is cut by at least 50 per cent too,” he added.

“There’s a lot of trial and error in the old model so a lot of those costs are due to the failure of really expensive clinical trials. We’re able to be more predictive and effective… and that’s going to cut hundreds of millions of dollars off the cost.”


The search for a thinking machine

By 2050 some experts believe that machines will have reached human-level intelligence.

Thanks, in part, to a new era of machine learning, computers are already learning from raw data in the same way as a human infant learns from the world around her.

It means we are getting machines that can, for example, teach themselves how to play computer games and get incredibly good at them (work ongoing at Google’s DeepMind) and devices that can start to communicate in human-like speech, such as voice assistants on smartphones.

Computers are beginning to understand the world outside of bits and bytes.

Image caption: Fei-Fei Li wants to create seeing machines that can help improve our lives (Photo: TED)

Fei-Fei Li has spent the last 15 years teaching computers how to see.

First as a PhD student and latterly as director of the computer vision lab at Stanford University, she has pursued this painstakingly difficult goal, with the ultimate aim of creating electronic eyes that allow robots and machines to see and, more importantly, understand their environment.

Half of all human brainpower goes into visual processing even though it is something we all do without apparent effort.

“No one tells a child how to see, especially in the early years. They learn this through real-world experiences and examples,” said Ms Li in a talk at the 2015 Technology, Entertainment and Design (Ted) conference.

“If you consider a child’s eyes as a pair of biological cameras, they take one picture about every 200 milliseconds, the average time an eye movement is made. So by age three, a child would have seen hundreds of millions of pictures of the real world. That’s a lot of training examples,” she added.
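Taken at face value, that arithmetic checks out. A quick sketch – assuming roughly 12 waking hours a day, a figure not in her talk – lands comfortably in the hundreds of millions:

```python
# Rough check of the "hundreds of millions of pictures" figure.
# Assumption not from the talk: about 12 waking hours per day.
pictures_per_second = 1 / 0.2          # one eye movement every ~200 ms
waking_seconds_per_day = 12 * 60 * 60
days_by_age_three = 3 * 365

total = pictures_per_second * waking_seconds_per_day * days_by_age_three
print(f"{total:,.0f} pictures")        # about 236,520,000
```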

She decided to teach computers in a similar way.

“Instead of focusing solely on better and better algorithms, my insight was to give the algorithms the kind of training data that a child is given through experiences in both quantity and quality.”



Back in 2007, Ms Li and a colleague set about the mammoth task of sorting and labelling a billion diverse and random images from the internet to offer examples of the real world for the computer – the theory being that if the machine saw enough pictures of something, a cat for example, it would be able to recognise it in real life.

They used crowdsourcing platforms such as Amazon’s Mechanical Turk, calling on 50,000 workers from 167 countries to help label millions of random images of cats, planes and people.

Eventually they built ImageNet – a database of 15 million images across 22,000 classes of objects organised by everyday English words.

It has become an invaluable resource used across the world by research scientists attempting to give computers vision.

Each year Stanford runs a competition, inviting the likes of Google, Microsoft and Chinese tech giant Baidu to test how well their systems can perform using ImageNet. In the last few years they have got remarkably good at recognising images – with around a 5% error rate.

To teach the computer to recognise images, Ms Li and her team used neural networks, computer programs assembled from artificial brain cells that learn and behave in a remarkably similar way to human brains.

A neural network dedicated to interpreting pictures has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons arranged in a series of layers.

Each layer will recognise different elements of the picture – one will learn that there are pixels in the picture, another layer will recognise differences in the colours, a third layer will determine its shape and so on.

By the time it gets to the top layer – and today’s neural networks can contain up to 30 layers – it can make a pretty good guess at identifying the image.
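For readers who want to see what such a layered network looks like in code, here is a small, purely illustrative sketch written with TensorFlow’s Keras interface – a toy image classifier, not the Stanford team’s actual model:

```python
import tensorflow as tf

# A tiny layered image classifier for 32x32 colour images in 10 classes.
# Early layers respond to low-level detail such as edges and colours,
# later layers to shapes, and the final layers to whole-object patterns.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the stack of layers and their parameter counts
```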

Image caption: Some of the pictures the Stanford computers labelled (Photo: Stanford University)

At Stanford, the image-reading machine now writes pretty accurate captions (see examples above) for a whole range of images although it does still get things wrong – so for instance a picture of a baby holding a toothbrush was wrongly labelled “a young boy is holding a baseball bat”.

Despite a decade of hard work, it still only has the visual intelligence level of a three-year-old, said Prof Li.

And, unlike a toddler, it doesn’t yet understand context.

“So far, we have taught the computer to see objects or even tell us a simple story when seeing a picture,” Prof Li said.

But when she asks it to assess a picture of her own son at a family celebration the machine labels it simply: “Boy standing next to a cake”.

Image caption: The computer doesn’t always get it right – labelling this picture of a baby with a toothbrush “a young boy is holding a baseball bat” (Photo: Stanford University)

She added: “What the computer doesn’t see is that this is a special Italian cake that’s only served during Easter time.”

That is the next step for the laboratory – to get machines to understand whole scenes, human behaviours and the relationships between objects.

The ultimate aim is to create “seeing” robots that can assist in surgical operations, search out and rescue people in disaster zones and generally change our lives for the better, said Ms Li.

AI history

The work into visual learning at Stanford illustrates how complex just one aspect of creating a thinking machine can be and it comes on the back of 60 years of fitful progress in the field.

Back in 1950, pioneering computer scientist Alan Turing wrote a paper speculating about a thinking machine and the term “artificial intelligence” was coined in 1956 by Prof John McCarthy at a gathering of scientists in New Hampshire known as the Dartmouth Conference.

Image caption: Alan Turing was one of the first to start thinking about the possibilities of AI (Photo: Getty Images)

After some heady days and big developments in the 1950s and 60s, during which both the Stanford lab and one at the Massachusetts Institute of Technology were set up, it became clear that the task of creating a thinking machine was going to be a lot harder than originally thought.

There followed what was dubbed the AI winter – a period of academic dead-ends when funding for AI research dried up.

But, by the 1990s, the focus in the AI community shifted from a logic-based approach – which basically involved writing a whole lot of rules for computers to follow – to a statistical one, using huge datasets and asking computers to mine them to solve problems for themselves.

In the 2000s, faster processing power and the ready availability of vast amounts of data created a turning point for AI and the technology underpins many of the services we use today.

It allows Amazon to recommend books, Netflix to suggest movies and Google to offer up relevant search results. Smart little algorithms began trading on Wall Street – sometimes going further than they should, as in the 2010 Flash Crash when a rogue algorithm was blamed for wiping billions off the New York stock exchange.

It also provided the foundations for the voice assistants, such as Apple’s Siri and Microsoft’s Cortana, on smartphones.

At the moment such machines are learning rather than thinking, and whether a machine can ever be programmed to think is debatable, given that the nature of human thought has eluded philosophers and scientists for centuries.

And there will remain elements to the human mind – daydreaming for example – that machines will never replicate.

But increasingly they are evaluating their knowledge and improving it and most people would agree that AI is entering a new golden age where the machine brain is only going to get smarter.

AI TIMELINE

  • 1951 – The first neural net machine, SNARC, was built; in the same year, Christopher Strachey wrote a checkers programme and Dietrich Prinz wrote one for chess.
  • 1957 – The General Problem Solver was invented by Allen Newell and Herbert Simon.
  • 1958 – AI pioneer John McCarthy came up with LISP, a programming language that allowed computers to operate on themselves.
  • 1960 – Research labs built at MIT with a $2.2m grant from the Advanced Research Projects Agency – later known as Darpa.
  • 1960 – Stanford AI project founded by John McCarthy.
  • 1964 – Joseph Weizenbaum created the first chatbot, Eliza, which could fool humans even though she largely repeated back what was said to her.
  • 1968 – Arthur C. Clarke and Stanley Kubrick immortalised HAL, the classic vision of a machine that would match or exceed human intelligence by 2001.
  • 1973 – A report on AI research in the UK formed the basis for the British government to discontinue support for AI in all but two universities.
  • 1979 – The Stanford Cart became the first computer-controlled autonomous vehicle when it circumnavigated the Stanford AI lab.
  • 1981 – Danny Hillis designed a machine that utilised parallel computing to bring new power to AI.
  • 1980s – The backpropagation algorithm allowed neural networks to start learning from their mistakes.
  • 1985 – Aaron, an autonomous painting robot, was shown off.
  • 1997 – Deep Blue, IBM’s chess machine, beat then world champion Garry Kasparov.
  • 1999 – Sony launched the AIBO, one of the first artificially intelligent pet robots.
  • 2002 – The Roomba, an autonomous vacuum cleaner, was introduced.
  • 2011 – IBM’s Watson defeated champions from TV game show Jeopardy.
  • 2011 – Smartphones introduced natural language voice assistants – Siri, Google Now and Cortana.
  • 2014 – Stanford and Google revealed computers that could interpret images.

Intelligent Machines: What does Facebook want with AI?

These days artificial intelligence research is no longer the preserve of universities – the big technology firms are also keen to get involved.

Google, Facebook and others are busy opening AI labs and poaching some of the most talented university professors to head them up.

Prof Yann LeCun is a hugely influential force in the field of Deep Learning and is now director of AI research at Facebook.

He spoke to the BBC about what the social network is doing with the technology and why he thinks Elon Musk and Stephen Hawking are wrong in their predictions about AI destroying humanity. Here are his thoughts.

What is artificial intelligence?

Image caption: Yann LeCun thinks fears about AI are overblown

It is the ability of a machine to do things that we deem intelligent behaviour for people or animals. Increasingly it has become the ability for machines to learn by themselves and improve their own performance.

We hear a lot about machines learning but are they really thinking?

Image caption: Can a machine ever think in the way humans understand the activity? (The Thinker, sculpture by Rodin. Photo: Getty Images)

The machines that we have at the moment are very primitive in a way. Some of them, to some extent, emulate the basic principles of how the brain works – they are not at all a carbon copy of brain circuits but they have a little bit of the same flavour.

They are very small by biological standards. The biggest neural networks that we simulate have in the order of a few million simulated neurons and a few billion synapses – which are the connections between neurons – and that would put them on par with very small animals, so nothing like what we would think as humans.

In that sense they are not thinking and we are still very far from building machines that can reason, plan, remember properly, have common sense and know how the world works.

But what they can do is recognise objects and images with what seems to be superhuman performance at times and they can do a decent job at translating text from one language to another or recognising speech. So in that sense they do things that humans would consider an intelligent task.


Cortana Cozies Up to Android Users

Microsoft on Monday released a public beta of its Cortana personal assistant for Android.

The Cortana app can do most of the things Cortana does on Windows: set and get reminders; search the Web on the go; track information such as flight details; and begin and complete tasks across all a user’s devices.

Cortana will find answers on the Web in response to spoken or typed requests, including things like sports scores, movie times, where to find a particular kind of restaurant, and other factual information. Cortana also supports voice texting.

However, Android device users can’t toggle settings, open apps, or launch Cortana hands-free simply by saying “Hey Cortana,” as they can with Windows.

As is typical with betas, Microsoft is working to improve the Cortana for Android experience and will incorporate user feedback, the company said.

Whys and Wherefores

There is already a personal digital assistant available to Android users, of course — Google Now. So how does Microsoft expect to attract converts to Cortana?

“This is a dilemma similar to what [Microsoft faced in] the phone market,” remarked Tuong Nguyen, a principal analyst at Gartner.

“You essentially have a product out that is, at best, the same as what users already currently have,” he told TechNewsWorld, “so, although it’s good that Cortana will be made available across platforms, there’s currently nothing to compel the average user to switch.”

It’s possible that Microsoft is positioning Cortana for Android as a beachhead, since mobile continues to be one of its major business focuses, despite its having written off its mobile handset business, speculated Susan Schreiner, an analyst at C4 Trends.

Further, “this is no different from Microsoft’s usual marketing strategy, releasing betas first,” she told TechNewsWorld.

Nadella’s Multiplatform Strategy

Making Cortana available for Android and iOS “is the only way Microsoft actually could advance its multiplatform strategy, with its Windows Phone platform holding a market share of under 3 percent,” said Kevin Burden, a research vice president at 451 Research.

“Voice recognition is part of Cortana, that’s how users will interact with it,” he told TechNewsWorld. “It just doesn’t have the wake-up capabilities which require the phone to always be listening.”

Hands-free capabilities will come to Cortana “when the hardware catches up to its requirement to always be listening,” Burden predicted.

Who Loves Cortana?

Given that Cortana for Android is still in beta and lacks several key features, who would give it a shot? Tech geeks — that’s who.

Twenty-four percent of Germans and 38 percent of Americans reported having used personal assistants on their smartphones in the previous three months, in a Gartner survey conducted in the two countries last year, Nguyen said. “The base of users who would try using Cortana on Android would be a subset of this.”

For Windows users looking for an integrated experience across their devices, an upgrade to Windows 10 with Cortana on their Android or iOS smartphone and tablet “is the beginning of an experience that will continue to improve as more of Microsoft’s services seep into Cortana’s back end,” Burden said.

Ultimately, “it all depends on what developers do with this,” Schreiner pointed out.

Microsoft hasn’t had much luck getting devs to create mobile apps so far, she said, so “this could be Microsoft’s way of testing the market.”
