
Apple Music is 10 years out of date

Eric Schmidt, the chairman of Google, has written an op-ed piece for the BBC talking up artificial intelligence.

He said that much of Google’s focus as a company has shifted to artificial intelligence, with projects such as Google Now embodying the company’s aim of using a computer to work out what humans need before they even know it themselves.

However, Schmidt said that Apple has still not got the hang of this with its Apple Music project.
Apple employs real DJs who curate what is heard, something that Schmidt thinks is so ten years ago.

He said: “A decade ago, to launch a digital music service, you probably would have enlisted a handful of elite taste-makers to pick the hottest new music.

“Today, you’re much better off building a smart system that can learn from the real world – what actual listeners are most likely to like next – and help you predict who and where the next Adele might be.

“As a bonus, it’s a much less elitist taste-making process – much more democratic – allowing everyone to discover the next big star through our own collective tastes and not through the individual preferences of a select few.”

To be fair to Apple, Google is not close to producing a music service that can guess what you want to hear. It is easier for old-farts like me who don’t listen to anything that has been created in the last ten years. Google AI just deletes popular beat combo artists like Justin Bieber, Miley Cyrus, Nicki Minaj, Lil’ Wayne, Taylor Swift, Selena Gomez, Rihanna and Kanye West.

Enterprises are thinking of artificial intelligence

The market for artificial intelligence applications for large organisations will be worth $200 million this year but as much as $2 billion by 2020.

That’s the prediction of market research firm Trendforce, which said big vendors are snapping up startups and personnel to bolster the development of such applications.

Giants like Apple, Google, IBM, Microsoft and Yahoo are in a race to be first to market with persuasive enterprise AI apps, whereas in former days AI was a specialised research field.

Carlos Yu, a senior analyst at Trendforce, said that AI technologies have made great advances in the last few years. “Large enterprises have not only begun to acquire AI solutions, they have also set about improving their AI based services by combining them with cloud and big data technologies,” he said.

IBM appears to be the leader of the pack right now, but the other giants are using “deep learning” AI to improve their technologies.

Yu believes that AI will be adopted in major industries including manufacturing, automotive, internet, retailing and information engineering.

He said in the healthcare sector AI is being combined with wearable devices to both improve patient care and also help design new pharmaceuticals.

Robots pass self-awareness test

A researcher at the Rensselaer Polytechnic Institute in New York has built a robot that has passed the classic ‘wise men puzzle’ test of self-awareness.

The wise men puzzle is based on a story. A king is choosing a new advisor and gathers the three wisest people in the land. He promises the contest will be fair, then puts either a blue or white hat on each of their heads and tells them all that the first person to stand up and correctly deduce the colour of their own hat will become his new advisor.

Selmer Bringsjord set up three robots. Two were prevented from talking, then all three were asked which of them was still able to speak. All three attempt to say “I don’t know”, but only one succeeds – and when it hears its own voice, it understands that it was not silenced, saying “Sorry, I know now!”

For a robot to pass the test it has to listen to and understand the question, then hear its own voice saying “I don’t know” and recognise it as distinct from another robot’s voice, then connect that with the original question to conclude that it had not been silenced.
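That chain of reasoning can be sketched in a few lines of code. This is purely illustrative (Bringsjord’s actual programming has not been published), and the Robot class and its methods are invented for the example:

```python
class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted
        self.knows_answer = False

    def try_to_speak(self):
        """Attempt to answer; return the utterance, or None if muted."""
        if self.muted:
            return None
        return f"{self.name}: I don't know"

    def hear(self, utterance):
        """Hearing its own voice tells the robot it was not silenced."""
        if utterance and utterance.startswith(self.name):
            self.knows_answer = True
            return f"{self.name}: Sorry, I know now!"
        return None

robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]
for r in robots:
    reply = r.hear(r.try_to_speak())
    if reply:
        print(reply)  # only the unmuted robot connects the voice to itself
```

The crucial step is the last one: the robot must match the voice it hears against its own identity, not merely produce the utterance.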

Details about how the robots passed, and what sort of programming was used, have not been released yet. Bringsjord’s work will be presented at the RO-MAN conference in Japan, which runs from 31 August to 4 September 2015, so we will probably know more then.

Nvidia plans more AI

Nvidia is updating its Digits software for designing neural networks.

Digits version 2 comes with a graphical user interface, potentially making it accessible to programmers beyond the typical user base of academics and developers who specialise in AI.

Nvidia vice president of accelerated computing Ian Buck said that the previous version could be controlled only through the command line, which required knowledge of specific text commands and forced the user to jump to another window to view the results.

Digits will now enable up to four processors to work together simultaneously to build a learning model. Because the models can run on multiple processors, Digits can build them up to four times faster than the first version.

Nvidia wants AI to take off because it requires heavy computational power where its GPUs can do rather well. Nvidia first released Digits as a way to cut out a lot of the menial work it takes to set up a deep learning system.

It does have users: Yahoo found this new approach cut the time required to build a neural network for automatically tagging photos on its Flickr service from 16 days to five days.

Nvidia updated some of its other software to make it more AI friendly, including its CUDA (Compute Unified Device Architecture) parallel programming platform and application programming interface, which now supports 16-bit floating point arithmetic. This helps developers cram more data into the system for modelling. The company updated its CUDA Deep Neural Network library of common routines to support 16-bit floating point operations too.
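As a rough illustration of why half precision helps, a 16-bit float takes half the storage of a 32-bit one, so twice as much model data fits in the same GPU memory (at reduced precision). Python’s standard struct module can show the size difference:

```python
import struct

# The same value packed as a 32-bit float ('f') occupies 4 bytes,
# but as a 16-bit half float ('e') it occupies only 2.
fp32 = struct.pack("f", 3.14159)
fp16 = struct.pack("e", 3.14159)
print(len(fp32), len(fp16))  # prints "4 2"
```

The trade-off is precision: half floats carry roughly three significant decimal digits, which is often tolerable for neural network weights.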

Artificial intelligence is good for jobs

A survey reveals that leaders in large enterprises think artificial intelligence (AI) is good for jobs and isn’t a force of darkness.

Narrative Science surveyed 200 CEOs, CTOs, and other senior directors across many Fortune 500 companies.

Eighty percent of the respondents believe that AI technologies create jobs and improve worker performance and efficiency.

Only 15 percent of respondents believed that AI will cost jobs.

The report also surveyed how organisations combine AI with big data technology: 31 percent said data visualisation apps are the most commonly used analytics technology.

Fifty-nine percent of respondents that harness big data also use AI, and 32 percent said voice recognition and response are the most widely used apps.

You may not be surprised to learn that 14 percent use AI to automate communications to their customers.

Tech heavyweights buy some artificial intelligence

Tech industry big hitters Elon Musk, Mark Zuckerberg, and Ashton Kutcher have decided that they are short on the intelligence that nature gave them and want to buy a bit more.

The three wrote a cheque for $40 million to invest in an outfit called Vicarious.

This is the second major cash injection in two years for the AI company, which is building software that mirrors the computational capabilities of the human brain. Well not ours, of course, otherwise it would not work until at least 11AM without a major coffee or Candy Crush.

Vicarious intends to replicate the part of the brain that sees, controls the body, understands language, and does the adding up. It wants to translate the actions of the neocortex to create a computer that thinks, but does not need to waste any time eating or sleeping, having sex, going through puberty, watching reality TV, or writing to the Daily Mail.

Musk, Zuckerberg, and Kutcher could not be reached for comment; none are listed yet on the company’s website as investors and the source of the story is the Wall Street Journal.

Vicarious is rather secretive and has already created software that interprets photos and videos as any human would, or so it is said.

A Facebook spokesman told the Journal that Zuckerberg’s Vicarious investment was made on a personal level, and will not affect the social network’s plans for development.

Earlier this year, Google stepped into the ring when it wrote a cheque for $400 million to buy DeepMind, a “cutting edge artificial intelligence company” with commercial applications in simulations, e-commerce, and games.

Facebook AI can spot a face 97.25 percent of the time

Researchers from Facebook said that advances in the relatively new artificial-intelligence field known as deep learning mean their software can spot a face 97.25 percent of the time.

Asked whether two unfamiliar photos of faces show the same person, a human being will get it right 97.53 percent of the time. New software developed by researchers at Facebook scores 97.25 percent on the same challenge.

This is a big step up from previous face-matching software, and is being touted as a success story for deep learning. This area of AI involves software that uses networks of simulated neurons to learn to recognise patterns in large amounts of data.

Yaniv Taigman, a member of Facebook’s AI team, said the error rate has been reduced by more than a quarter relative to earlier software that can take on the same task.

Dubbed DeepFace, Facebook’s new software runs a routine called facial verification; it recognises that two images show the same face. It does not do facial recognition yet, but some of the underlying techniques could be applied to that problem.

DeepFace remains purely a research project for now. Facebook released a research paper on the project last week, to get feedback from the research community.

DeepFace corrects the angle of a face so that the person in the picture faces forward, using a 3D model of an “average” forward-looking face.

The deep learning kicks in and a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face. 
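That final comparison step can be sketched as follows. The descriptor values and the 0.8 threshold are invented for illustration; DeepFace’s actual network, descriptor size and decision rule are more involved:

```python
import math

def verify(desc_a, desc_b, threshold=0.8):
    """Decide whether two face descriptors belong to the same person by
    comparing their cosine similarity against a threshold. The descriptor
    length and threshold here are illustrative, not DeepFace's."""
    dot = sum(a * b for a, b in zip(desc_a, desc_b))
    norm = (math.sqrt(sum(a * a for a in desc_a))
            * math.sqrt(sum(b * b for b in desc_b)))
    return dot / norm >= threshold

# Two near-identical descriptors count as the same face;
# very different ones do not.
print(verify([1.0, 0.9, 0.1], [0.9, 1.0, 0.2]))  # True
print(verify([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # False
```

The hard work is in producing descriptors such that the same face photographed twice lands close together in this space, which is what the neural network is trained to do.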

Your AI will live on after you

A startup called Eterni.me is working out a way to create an AI version of you based on your internet interactions.

The idea is that you can create a persona which lets you chat, see, and interact with the digitised dearly departed. It means that the ancient idea of living with the dead spirits of your family will become a reality.

Marius Ursache, the startup’s chief executive, said the idea of the technology is to allow people to communicate with their lost loved ones just as we chat with the living.

The main difficulty in creating such an AI is that it must be built on private data collected over many years.

Rather than talking to the dead person, you are really communicating with the vast amounts of information people generate throughout their life, and allowing others to make sense of it.

In the good old days the dead left journals and diaries, private personal narratives that provide this kind of connection.

These days we generate so much more: unfiltered GChat, GMail, and Facebook archives are almost too much to make sense of. Enter the idea of an AI-based avatar to communicate with it all.

Eterni.me claims it will “launch soon,” but its prototype is still primitive and builds off existing adaptive algorithms. The idea could be years away from being properly released.

However, the team also insists the elements are more or less in place as all you need are email logs, location data and the necessary tools to synthesise them. 

IBM puts Watson on the cloud

IBM is planning to put its Watson supercomputer onto the cloud with the idea of growing software that takes advantage of the system’s artificial intelligence capabilities.

Watson, derived from IBM’s DeepQA project, drew worldwide attention in 2011 after it soundly defeated humans on the Jeopardy! telly show.

Since then Biggish Blue has been using Watson in areas like health care, but now the company is ready to share Watson with the broader world.

Rob High, an IBM fellow who serves as CTO of Watson, told IT World that the technology was stable enough to support an ecosystem. IBM thinks it has something special that should not be held back.

Watson has also shrunk as hardware. A basic configuration runs on between 16 and 32 cores with 256GB of RAM, according to High. If Watson needs a big think, IBM can chain these smaller Watson boxes together for greater scale.

IBM is working with a handful of partners on the Watson cloud service, and each is developing specialized applications.

An outfit called Fluid is creating a Watson-powered program for retail, designed to help buyers make better purchases.

The Watson cloud will include a development toolkit, access to Watson’s API, educational material and an application marketplace. Big Blue wants to work with venture capitalists to find startups that want to build software on Watson. 

All music sounds the same these days

A computer analysis of pop music throughout the ages shows that most music that the kids of today are listening to is identical.

This confirms what your gran says about those modern popular beat combo artists: they are getting worse.

According to Nature, Joan Serrà, a researcher at the Artificial Intelligence Research Institute, has found that music has become louder and more similar over the decades.

He was studying music using the theory that music, like language, can evolve over time and is pulled in different directions by opposing forces.

Dr Serrà’s team sifted through the Million Song Dataset, run jointly by Columbia University, in New York, and the Echo Nest, an American company, which contains beat-by-beat data on a million Western songs from a variety of popular genres.

The team looked at pitch, timbre and loudness, which were available for nearly half a million songs released from 1955 to 2010.

Modern music uses the same chords as music from the 1950s and most melodies are composed of the ten most popular chords.

But chord usage also follows Zipf’s law, familiar from written texts, which shows that the most common item occurs roughly twice as often as the second most common.
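A toy check of that Zipf-like pattern, using a made-up chord sequence rather than the Million Song Dataset:

```python
from collections import Counter

# Invented chord counts roughly following a Zipf distribution;
# the real study measured these across nearly half a million songs.
chords = ["C"] * 40 + ["G"] * 21 + ["Am"] * 13 + ["F"] * 10 + ["Dm"] * 8

counts = Counter(chords).most_common()
ratio = counts[0][1] / counts[1][1]
print(f"most common vs second most common: {ratio:.2f}x")  # ≈ 1.90x
```

Under Zipf’s law that ratio sits near 2, the third most common item appears about a third as often as the first, and so on down the rank order.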

The only thing that has really changed is how the chords are spliced into the tune. In the 1950s many of the less common chords would be used close together. More recently, they have tended to be separated by more pedestrian chords.

Timbral variety, lent by instrument types and recording techniques, similarly shows signs of narrowing. Variety peaked in the 1960s thanks to experimentation with electric guitar sounds.

But while the current music is all the same, at least it is louder. Songs today are on average nine decibels louder than half a century ago.

This is because record mixers use loudness to catch radio listeners’ attention.
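For a sense of scale, decibels are logarithmic, so a nine-decibel rise is a large jump. A quick calculation:

```python
# +9 dB corresponds to roughly an eight-fold increase in acoustic power
# and a near-tripling of signal amplitude.
power_ratio = 10 ** (9 / 10)      # ≈ 7.94x the power
amplitude_ratio = 10 ** (9 / 20)  # ≈ 2.82x the amplitude
print(round(power_ratio, 2), round(amplitude_ratio, 2))  # 7.94 2.82
```

In other words, today’s records are not marginally louder than their 1950s counterparts; they are pushing nearly eight times the power.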

What is scary is that as music becomes more similar, it is getting harder for computers to tell songs apart. Sorting them into genres using timbre measurements alone is no longer enough.