Two millionaires have each committed $10 million to save the world from the troubles of AI.
Reid Hoffman, the founder of LinkedIn, and eBay founder Pierre Omidyar’s non-profit are spending a fortune funding academic research and development aimed at keeping artificial intelligence systems ethical.
The fund received an additional $5 million from the Knight Foundation and two other $1 million donations from the William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group. The $27 million reserve is being anchored by MIT’s Media Lab and Harvard’s Berkman Klein Center for Internet and Society.
While the project acknowledges that AI has its uses, its backers are worried that things can go wrong. The most critical challenge is making sure that machines are not trained to perpetuate and amplify the same human biases that plague society.
The money will pay for research into how socially responsible artificially intelligent systems can be designed to, say, keep computer programs that are used to make decisions in fields like education, transportation and criminal justice accountable and fair.
The group wants to engage the public and foster understanding of the complexities of artificial intelligence. The two universities will form a governing body along with Hoffman and the Omidyar Network to distribute the funds.
The Wharton School at the University of Pennsylvania has warned that all the developed nations on earth will see job loss rates of up to 47 per cent within the next 25 years.
The statistic is based on a recent Oxford University study and includes blue and white collar jobs. So far the losses have been restricted to the blue collar variety, particularly in manufacturing, so no one has cared that much, as this has been happening since the 1960s.
The new trend is not creating new jobs either. By 2034, just a couple of decades away, mid-level jobs will be by and large obsolete.
So far the benefits have only gone to the ultra-wealthy, the top 1 per cent. This coming technological revolution is set to wipe out what looks to be the entire middle class.
Accountants, doctors, lawyers, teachers, bureaucrats, and financial analysts beware: your jobs are not safe. Soon computers will analyse and compare reams of data to make financial decisions or medical ones. There will be less of a chance of fraud or misdiagnosis, and the process will be more efficient. Not only are these folks in trouble, the trend is likely to freeze salaries for those who remain employed while income gaps widen.
Unfortunately the report suggests that it is too late to turn Luddite and break up the machines. Governments will need to sort out some form of retraining, although it is not clear what the nasty fleshy pink lumps can do that robots can’t.
AMD is announcing a new series of Radeon-branded products today, targeted at machine intelligence and deep learning enterprise applications.
Dubbed the Radeon Instinct, the chip is a GPU-based solution for deep learning, inference and training. AMD has also issued a new free, open-source library and framework for GPU accelerators, dubbed MIOpen.
MIOpen is made for high-performance machine intelligence applications and is optimized for deep learning frameworks in AMD’s ROCm software suite.
The first products are the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and can manage 5.7 TFLOPS of peak FP16 performance when the wind is behind it and it is going downhill.
It also includes the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High-Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. AMD claims the MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W.
The Radeon Instinct MI25 accelerator uses AMD’s next-generation Vega GPU architecture and has a board power of approximately 300W. All the Radeon Instinct accelerators are passively cooled but when installed into a server chassis you can bet there will be plenty of air flow.
Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All the Radeon Instinct cards will also support AMD MultiGPU (MxGPU) hardware virtualisation.
Software King of the World has admitted that its Chinese flavoured AI chat bot will not talk about anything that the authorities behind the bamboo curtain don’t want it to talk about.
Xiaoice would not directly respond to questions surrounding topics deemed sensitive by the Chinese state including the Tiananmen Square massacre of 1989 or “Steamed Bun Xi,” a nickname of Chinese President Xi Jinping.
“Am I stupid? Once I answer you’d take a screengrab,” read one answer to a question that contained the words “topple the Communist Party.”
Mentioning Donald “Prince of Orange” Trump also drew an evasive response from the chat bot. “I don’t want to talk about it,” Xiaoice said. Fair enough, who does?
Microsoft has admitted that there was some filtering around Xiaoice’s interactions.
“We are committed to creating the best experience for everyone chatting with Xiaoice,” a Microsoft spokesperson said. “With this in mind, we have implemented filtering on a range of topics.” The tech giant did not elaborate on which specific topics the filtering applied to.
Microsoft says that Xiaoice engages in conversations with over 40 million Chinese users on social media platforms like Weibo and WeChat.
Chinese outfit LingLong has created an AI based assistant it has dubbed the DingDong, which is making a song and dance in the consumer electronics market.
The gear has a music library of three million songs, can take memos and share updates regarding news, traffic and weather in what the firm calls “cinema-like sound quality”.
It speaks Cantonese and Mandarin, which means it can roll into the lucrative Chinese market and get a head start on its Western rivals.
It costs $118 and answers questions, gives directions and plays music in high-quality 320Kbps format.
The device comes in four colours: red for prosperity, white for purity, black for money and purple because it is pretty.
In the west, Amazon is the leader in this space. It released its Echo, a smart speaker powered by Alexa, in 2014. Users can ask Alexa to do a range of things such as request an Uber or order their usual from Domino’s, and there are more than three million units out in the world.
Most DingDong owners use the technology as a music player, or as someone to talk to.
Samsung is to launch an artificial intelligence digital assistant service for its upcoming Galaxy S8 smartphone.
It had been expected. Samsung recently bought Viv Labs, a firm run by a co-creator of Apple’s Siri voice assistant. Samsung plans to integrate the outfit’s AI platform, called Viv, into the Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.
Samsung wants its Galaxy S8 to help revive smartphone momentum after scrapping the fire-prone Galaxy Note 7. Investors and analysts say the Galaxy S8 must be a strong device for Samsung to win back customers and revive earnings momentum.
Samsung did not comment on what types of services would be offered through the AI assistant that will be launched on the Galaxy S8, which is expected to go on sale early next year. It said the AI assistant would allow customers to easily use third-party services.
Samsung Executive Vice President Rhee Injong said that developers can attach and upload services to Samsung’s AI.
“Even if Samsung doesn’t do anything on its own, the more services that get attached the smarter this agent will get, learn more new services and provide them to end-users with ease,” he said.
Google is widely considered to be the leader in AI, but Amazon, Microsoft and Apple have offerings which include voice-powered digital assistants.
Google Brain, Google’s deep learning project, has started protecting information from prying eyes.
Researchers Martín Abadi and David Andersen found that computers could make their own form of encryption using machine learning, without being taught specific cryptographic algorithms.
OK the encryption was basic, but neural nets “are generally not meant to be great at cryptography”.
The Google Brain team set up three neural nets called Alice, Bob and Eve. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.
To make sure the message remained secret, Alice had to convert her original plain-text message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. This “cipher text” had to be decipherable by Bob, but nobody else. Both Alice and Bob started with a pre-agreed set of numbers called a key, which Eve didn’t have access to, to help encrypt and decrypt the message.
Initially they were rubbish at it, but Alice slowly developed her own encryption strategy and Bob worked out how to decrypt it.
After 15,000 training rounds, Bob could convert Alice’s cipher text back into plain text, while Eve could guess just 8 of the 16 bits forming the message. As each bit is just a 1 or a 0, that is the success rate you would expect from pure chance. The research is published on arXiv.
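For flavour, here is a toy stdlib Python sketch of the three roles. This is not the learned networks from the paper (the XOR stands in for whatever transformation Alice actually learned), but it shows why Eve lands at coin-flip accuracy: Bob holds the key and can invert Alice’s transformation exactly, while Eve can only guess each bit.

```python
import random

BITS = 16  # message length used in the Google Brain experiment

def encrypt(plaintext, key):
    # Alice: mix the plaintext with the shared key. Here it is a plain XOR;
    # in the actual research Alice *learns* her own transformation.
    return [p ^ k for p, k in zip(plaintext, key)]

def decrypt(ciphertext, key):
    # Bob: holds the key, so he can invert whatever Alice did
    return [c ^ k for c, k in zip(ciphertext, key)]

def eavesdrop(ciphertext):
    # Eve: no key, so the best she can manage is a guess per bit
    return [random.randint(0, 1) for _ in ciphertext]

random.seed(0)
plaintext = [random.randint(0, 1) for _ in range(BITS)]
key = [random.randint(0, 1) for _ in range(BITS)]

ciphertext = encrypt(plaintext, key)
bob_guess = decrypt(ciphertext, key)
eve_guess = eavesdrop(ciphertext)

bob_correct = sum(b == p for b, p in zip(bob_guess, plaintext))  # always 16 of 16
eve_correct = sum(e == p for e, p in zip(eve_guess, plaintext))  # about 8 of 16 on average
print(bob_correct, eve_correct)
```

Run Eve enough times and her hit rate averages out to 8 of 16 bits, which is exactly the chance-level result the researchers reported.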
Practical implications for the technology are limited, but it does mean that computers can develop and hide secrets from their human masters.
The cocaine nose jobs of Wall Street have reached the conclusion that they cannot trust humans to police their own dirty deals and are turning to AI instead.
Two exchange operators have announced plans to launch artificial intelligence tools for market surveillance in the coming months, and officials at a Wall Street regulator say they are about to do the same thing.
The software could snuffle around chat-room messages to detect dubious bragging around a big trade. It could be used to unravel complex issues, like “layering,” where orders are rapidly sent to exchanges and then canceled to move a stock price.
Tom Gira, executive vice president for market regulation at the Financial Industry Regulatory Authority (FINRA), said that the software could track down something dodgy that no one has thought of before.
FINRA plans to test the AI software next year, while Nasdaq and the London Stock Exchange Group expect to use it by the end of the year. The exchange operators also plan to sell the technology to banks and fund managers, so that they can monitor their traders.
Market surveillance already relies on algorithms to detect patterns in trading data that may signal manipulation and prompt staff to investigate. But the problem is that the high volume of data can lead to an overwhelming number of alerts, most of which are false alarms.
The “machine learning” software it is developing will be able to look beyond those set patterns and understand which situations truly warrant red flags, said Gira.
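To make the triage idea concrete, here is a minimal stdlib Python sketch, and emphatically not FINRA’s or Nasdaq’s actual system: a toy logistic-regression classifier learns from previously investigated alerts (synthetic data here, with made-up features like cancel ratio and burst rate) which patterns truly warrant a red flag, so only high-scoring alerts reach a human analyst.

```python
import math
import random

# Each historical alert is reduced to two illustrative features: the
# cancel-to-order ratio and how rapid the order bursts were, labelled
# from past investigations (1 = genuine manipulation such as layering,
# 0 = false alarm). All data here is synthetic.
random.seed(1)

def make_alert(label):
    if label:  # manipulative pattern: lots of cancels, rapid bursts
        return ([random.uniform(0.7, 1.0), random.uniform(0.6, 1.0)], 1)
    return ([random.uniform(0.0, 0.5), random.uniform(0.0, 0.5)], 0)

data = [make_alert(i % 2) for i in range(200)]

# A tiny logistic-regression classifier trained by gradient descent
w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(300):
    for x, y in data:
        g = predict(x) - y        # gradient of the log-loss
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# Only alerts the model scores above a high threshold get surfaced
worth_a_look = [x for x, _ in data if predict(x) > 0.9]
accuracy = sum((predict(x) > 0.5) == y for x, y in data) / len(data)
```

The point of the design is the last two lines: the rules-based surveillance layer keeps generating alerts as before, but the learned scorer sits on top and decides which of them are worth a human’s time.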
Technology experts are starting to worry that human drivers will bully self-drive cars – simply because they can.
While self-driving cars promise to bring increased safety, comfort and speed to our roads, the rest of the road will be populated by men in white vans, BMW drivers and Italians who will make life hell for automated roadsters.
The London School of Economics and Goodyear conducted a study into social attitudes to self-driving technology. It found that drivers who are more “combative” will welcome the adoption of self-driving technology, because they assume it will be easier to “bully” self-driving cars than actual humans.
Self-driving cars will be programmed to avoid accidents, just as they should be. So given the choice between driving timidly or causing an accident just to prove a point, the self-driving car will slam on the brakes every time. The more aggressive drivers in this survey said that they’d treat self-driving cars like “learner drivers” and mess with their automatic heads.
One respondent said he would be overtaking all the time “because they’ll be sticking to the rules”. Another said robot cars are going to stop: “So you’re going to mug them right off. They’re going to stop and you’re just going to nip around.”
So those who really should be in self-driving cars are exactly the sort of people who should not be behind the wheel. It is only a matter of time before drivers will need a psych test to see if they should be allowed to drive at all.
Two students at Carnegie Mellon University have designed an artificial intelligence program that is capable of beating human players in a death match game of 1993’s Doom.
Guillaume Lample and Devendra Singh Chaplot spent four months developing a program capable of playing first-person shooter games. The program made its debut at VizDoom (an AI competition that centered around the classic shooter) where it took second place despite the fact that their creation managed to beat human participants.
According to Lample and Chaplot their AI “allows developing bots that play the game using the screen buffer.” What that means is that the program learns by interpreting what is happening on the screen as opposed to following a pre-set series of command instructions alone. Basically it learns to play in exactly the same way a human player does.
Apparently the AI won one of the competition games by learning to duck, making itself much harder to hit.
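A caricature of the screen-buffer interface, in stdlib Python, might help. The bot sees only raw pixels and must map them to an action, with no peek at the game’s internal state. The policy below is hand-written purely for illustration (the function names, toy screen size and “bright pixel = enemy” rule are all invented here); Lample and Chaplot’s bot learns its pixel-to-action mapping instead of having it scripted.

```python
WIDTH = 8  # toy screen width in pixels

def act(screen):
    # screen: rows of pixel intensities; treat any bright pixel (1) as an enemy
    for row in screen:
        for x, pixel in enumerate(row):
            if pixel:
                third = WIDTH // 3
                if x < third:
                    return "turn_left"    # enemy at the left edge
                if x >= WIDTH - third:
                    return "turn_right"   # enemy at the right edge
                return "shoot"            # enemy roughly centred: fire
    return "move_forward"                 # empty screen: keep exploring

frame = [[0] * WIDTH,
         [0, 0, 0, 0, 1, 0, 0, 0]]  # one enemy near the middle
print(act(frame))  # shoot
```

The interesting part is what is absent: `act` never asks the game engine where the enemies are, which is exactly the constraint the VizDoom competition imposes.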
If it is developed further, it could improve video game artificial intelligence design, with your game AI getting better the longer you play the game. The next cunning plan is to get the program playing Quake, which uses a more complex 3D environment.