AMD is announcing a new series of Radeon-branded products today, targeted at machine intelligence and deep learning enterprise applications.
Dubbed Radeon Instinct, the line consists of GPU-based accelerators for deep learning inference and training. AMD has also released a new free, open-source library and framework for GPU accelerators, dubbed MIOpen.
MIOpen is made for high-performance machine intelligence applications and is optimized for deep learning frameworks in AMD’s ROCm software suite.
The first products are the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and can manage 5.7 TFLOPS of peak FP16 performance when the wind is behind it and it is going downhill.
The line-up also includes the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High-Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. AMD claims the MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W.
The Radeon Instinct MI25 accelerator uses AMD’s next-generation Vega GPU architecture and has a board power of approximately 300W. All the Radeon Instinct accelerators are passively cooled but when installed into a server chassis you can bet there will be plenty of air flow.
Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All the Radeon Instinct cards will also support AMD MultiGPU (MxGPU) hardware virtualisation.
Software King of the World Microsoft has admitted that its Chinese-flavoured AI chat bot will not talk about anything that the authorities behind the bamboo curtain don’t want it to talk about.
Xiaoice would not directly respond to questions surrounding topics deemed sensitive by the Chinese state including the Tiananmen Square massacre of 1989 or “Steamed Bun Xi,” a nickname of Chinese President Xi Jinping.
“Am I stupid? Once I answer you’d take a screengrab,” read one answer to a question that contained the words “topple the Communist Party.”
Mentioning Donald “Prince of Orange” Trump also drew an evasive response from the chat bot. “I don’t want to talk about it,” Xiaoice said. Fair enough, who does?
Microsoft has admitted that there was some filtering around Xiaoice’s interaction.
“We are committed to creating the best experience for everyone chatting with Xiaoice,” a Microsoft spokesperson said. “With this in mind, we have implemented filtering on a range of topics.” The tech giant did not elaborate on which specific topics the filtering applied to.
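Microsoft has not said how the filtering works, so the following is purely an illustrative sketch: the crudest possible approach would be a keyword blocklist that short-circuits the chat model whenever a banned phrase appears. The phrases and the `evasive_reply` function below are our own invention, not anything Microsoft has described.

```python
# Toy keyword filter -- an assumption about how such filtering *could* work,
# not a description of Xiaoice's actual system.
BLOCKED_PHRASES = {"tiananmen", "steamed bun xi", "topple the communist party"}

def evasive_reply(message):
    """Return a canned dodge if the message touches a blocked topic,
    otherwise None so the normal chat model can answer."""
    lowered = message.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return "I don't want to talk about it."
    return None
```

A real deployment would be far more sophisticated (classifiers rather than substring matching), but the observable behaviour — the same stock dodge for a whole range of topics — is consistent with something sitting in front of the model like this.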
Microsoft says that Xiaoice engages in conversations with over 40 million Chinese users on social media platforms like Weibo and WeChat.
Chinese outfit LingLong has created an AI-based assistant dubbed the DingDong, which is making a song and dance in the consumer electronics market.
The gear has a music library of three million songs, can take memos and share updates regarding news, traffic and weather in what the firm calls ‘cinema-like sound quality’.
It speaks Cantonese and Mandarin, which means it can roll into the lucrative Chinese market and get a head start on its Western rivals.
It costs $118 and answers questions, gives directions and plays music in high-quality 320Kbps format.
The device comes in four colours: red for prosperity, white for purity, black for money and purple because it is pretty.
In the West, Amazon is the leader in this space. It released its Echo – a smart speaker powered by Alexa – in 2014. Users can ask Alexa to do a range of things, such as request an Uber or order their usual from Domino’s, and there are more than three million units in the world.
Most DingDong owners use the technology as a music player, or as someone to talk to.
Samsung is to launch an artificial intelligence digital assistant service for its upcoming Galaxy S8 smartphone.
It had been expected. Samsung recently bought Viv Labs, a firm run by a co-creator of Apple’s Siri voice assistant. Samsung plans to integrate the outfit’s AI platform, called Viv, into the Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.
Samsung wants its Galaxy S8 to help revive smartphone momentum after scrapping the fire-prone Galaxy Note 7. Investors and analysts say the Galaxy S8 must be a strong device for Samsung to win back customers and revive earnings momentum.
Samsung did not comment on what types of services would be offered through the AI assistant that will be launched on the Galaxy S8, which is expected to go on sale early next year. It said the AI assistant would allow customers to easily use third-party services.
Samsung Executive Vice President Rhee Injong said that developers can attach and upload services to Samsung’s AI.
“Even if Samsung doesn’t do anything on its own, the more services that get attached the smarter this agent will get, learn more new services and provide them to end-users with ease,” he said.
Google is widely considered to be the leader in AI, but Amazon, Microsoft and Apple have offerings which include voice-powered digital assistants.
Google Brain, Google’s deep learning project, has started protecting information from prying eyes.
Researchers Martín Abadi and David Andersen found that computers could make their own form of encryption using machine learning, without being taught specific cryptographic algorithms.
OK the encryption was basic, but neural nets “are generally not meant to be great at cryptography”.
The Google Brain team set up three neural nets called Alice, Bob and Eve. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.
To make sure the message remained secret, Alice had to convert her original plain-text message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. This “cipher text” had to be decipherable by Bob, but nobody else. Both Alice and Bob started with a pre-agreed set of numbers called a key, which Eve didn’t have access to, to help encrypt and decrypt the message.
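The nets invented their own scheme rather than being handed one, but the textbook version of what Alice and Bob are doing is a shared-key XOR cipher. A minimal sketch, purely to illustrate the setup (the actual learned scheme was nothing this tidy):

```python
import secrets

def xor_bits(message, key):
    """XOR each message bit with the corresponding key bit."""
    return [m ^ k for m, k in zip(message, key)]

# 16-bit plaintext, matching the message size used in the experiment
plaintext = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
key = [secrets.randbelow(2) for _ in range(16)]  # the pre-agreed secret

ciphertext = xor_bits(plaintext, key)   # what Alice sends over the wire
recovered  = xor_bits(ciphertext, key)  # what Bob, who holds the key, computes
assert recovered == plaintext           # Bob gets the message back intact
```

Without the key, the ciphertext alone tells Eve nothing about any individual bit — which is exactly the position the eavesdropping net ended up in.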
Initially they were rubbish at it but Alice slowly developed her own encryption strategy, and Bob worked out how to decrypt it.
After 15,000 training runs, Bob could convert Alice’s cipher text message back into plain text, while Eve could guess just 8 of the 16 bits forming the message. As each bit was just a 1 or a 0, that is the same success rate you would expect from pure chance. The research is published on arXiv.
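That “no better than chance” claim is easy to sanity-check with a quick simulation: a random guesser on 16 random bits gets about eight of them right on average.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
TRIALS, BITS = 10_000, 16

total_correct = 0
for _ in range(TRIALS):
    message = [random.getrandbits(1) for _ in range(BITS)]
    guess   = [random.getrandbits(1) for _ in range(BITS)]
    total_correct += sum(m == g for m, g in zip(message, guess))

average = total_correct / TRIALS  # hovers around 8.0 of 16 bits
```

So Eve’s 8-of-16 score really does mean she learned nothing useful.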
Practical implications for the technology are limited, but it does mean that computers can develop and hide secrets from their human masters.
The cocaine nose jobs of Wall Street have reached the conclusion that they cannot trust humans to police their own dirty deals and are turning to AI instead.
Two exchange operators have announced plans to launch artificial intelligence tools for market surveillance in the coming months, and officials at a Wall Street regulator say they are about to do the same thing.
The software could snuffle around chat-room messages to detect dubious bragging around a big trade. It could be used to unravel complex issues, like “layering,” where orders are rapidly sent to exchanges and then canceled to move a stock price.
Tom Gira, executive vice president for market regulation at the Financial Industry Regulatory Authority (FINRA) said that the software could track down something dodgy which no one has thought of before.
FINRA plans to test the AI software next year, while Nasdaq and the London Stock Exchange Group expect to use it by the end of the year. The exchange operators also plan to sell the technology to banks and fund managers, so that they can monitor their traders.
Market surveillance already relies on algorithms to detect patterns in trading data that may signal manipulation and prompt staff to investigate. But the problem is that the high volume of data can lead to an overwhelming number of alerts, most of which are false alarms.
The “machine learning” software it is developing will be able to look beyond those set patterns and understand which situations truly warrant red flags, said Gira.
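To make the layering idea concrete, here is a toy detector of our own devising — not FINRA’s, Nasdaq’s or the LSE’s actual software — that flags traders who rack up several order cancellations within a short window. The event format, thresholds and function name are all assumptions for illustration.

```python
from collections import defaultdict

def flag_layering(events, max_gap=1.0, min_cancels=3):
    """Toy proxy for layering detection: flag any trader with at least
    `min_cancels` cancellations inside a `max_gap`-second window.
    `events` is a list of (timestamp, trader, action) tuples, where
    action is "place" or "cancel"."""
    cancels = defaultdict(list)
    for ts, trader, action in events:
        if action == "cancel":
            cancels[trader].append(ts)

    flagged = set()
    for trader, times in cancels.items():
        times.sort()
        # slide a window of min_cancels cancellations along the timeline
        for i in range(len(times) - min_cancels + 1):
            if times[i + min_cancels - 1] - times[i] <= max_gap:
                flagged.add(trader)
                break
    return flagged
```

A rule this crude is exactly the sort of pattern-matcher that drowns analysts in false alarms — the machine-learning layer Gira describes would sit on top, deciding which of these raw hits actually warrant a look.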
Technology experts are starting to worry that human drivers will bully self-drive cars – simply because they can.
While self-driving cars promise to bring increased safety, comfort and speed to our roads, the rest of the road will be populated by men in white vans, BMW drivers and Italians who will make life hell for automated roadsters.
The London School of Economics and Goodyear conducted a study into social attitudes to self-driving technology. It found that drivers who are more “combative” will welcome the adoption of self-driving technology, because they assume it will be easier to “bully” self-driving cars than actual humans.
Self-driving cars will be programmed to avoid accidents, just as they should be. So given the choice between driving timidly or causing an accident just to prove a point, the self-driving car will slam on the brakes every time. The more aggressive drivers in this survey said that they’d treat self-driving cars like “learner drivers” and mess with their automatic heads.
“I’ll be overtaking all the time because they’ll be sticking to the rules,” one respondent said. Another said robot cars are going to stop: “So you’re going to mug them right off. They’re going to stop and you’re just going to nip around.”
So those who really should be using self-driving cars are exactly the sort of people who should not be behind the wheel. It is only a matter of time before drivers will require a psych test to see if they should be allowed to drive at all.
Two students at Carnegie Mellon University have designed an artificial intelligence program that is capable of beating human players in a death match game of 1993’s Doom.
Guillaume Lample and Devendra Singh Chaplot spent four months developing a program capable of playing first-person shooter games. The program made its debut at VizDoom (an AI competition that centered around the classic shooter) where it took second place despite the fact that their creation managed to beat human participants.
According to Lample and Chaplot, their AI “allows developing bots that play the game using the screen buffer.” What that means is that the program learns by interpreting what is happening on the screen, as opposed to following a pre-set series of command instructions alone. Basically, it learns to play in exactly the same way a human player does.
Apparently the AI won one of the competition games by learning to duck, thereby making itself much harder to hit.
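For a flavour of what “playing from the screen buffer” means, here is a toy policy — nothing like the students’ actual neural net, which learns its behaviour rather than having it hand-coded — that decides whether to duck purely from pixel values. The frame format, brightness threshold and action names are all invented for the sketch.

```python
def choose_action(frame):
    """Toy screen-buffer policy: the agent never sees game state, only
    pixels. `frame` is a 2D list of brightness values (0-255); if
    something bright (a muzzle flash, say) shows up in the top half of
    the screen, the agent ducks, otherwise it presses on."""
    top_half = frame[: len(frame) // 2]
    if any(pixel > 200 for row in top_half for pixel in row):
        return "duck"
    return "move_forward"
```

The real system replaces this hand-written rule with a network that maps raw frames to actions, but the input/output contract — pixels in, button press out — is the same.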
If it is developed further, it could improve video game artificial intelligence design, with your game AI getting better the longer you play. The next cunning plan is to get the program playing Quake, which uses a more complex 3D environment.
After millions of miles of testing, it appears that Google’s self-driving AI is a bit better than me the day after I passed my driving test at age 16.
Google announced that its self-driving car racked up two million miles of driving experience. It’s a significant marker for Google — no other company has that many miles of fully self-driving experience.
Google’s head of self-driving tech Dmitri Dolgov said the goal of all that is to build a perfect driver.
In two million miles in four cities, Google thinks it has taught the car to go from a nervous teen student driver who might drive onto the pavement if a truck gets too close (yeah, I did) to the equivalent of a more experienced licensed driver who drives daily.
Google’s cars have been involved in 14 real accidents so far, 13 of which were caused by other drivers, which is pretty much what happened to me in my first driving years (one prang where I was rear-ended while turning right by a bloke whose brakes had failed).
Apparently the car has reached the point where it does not make sudden stops, unless it really has to. It also slightly swerves as any expert driver does.
All this is possible because of the mileage that the program has been put through. If you think about it, two million miles in more than seven years is a lot. According to a Google spokesperson, the average human only drives 13,000 miles a year.
Software King of the world, Microsoft has said that it has created a new artificial intelligence unit.
Vole is apparently diving into artificial intelligence (AI) and machine learning research head first rather than dipping its toes in the water first.
Microsoft teamed up with four other big technology companies – Amazon, Google, Facebook and IBM – to create a non-profit organisation to advance public understanding of AI technologies.
Vole’s new unit — Microsoft AI and Research Group — will be headed by Harry Shum, a company veteran who has held senior roles at the Microsoft Research and Bing engineering divisions.
“Microsoft has been working in artificial intelligence since the beginning of Microsoft Research, and yet we’ve only begun to scratch the surface of what’s possible,” Shum said in a statement.
Chief Executive Satya Nadella has previously said the company’s $26.2 billion deal for LinkedIn Corp is expected to help bolster its efforts in analytics, machine learning and AI.
Microsoft has also been acquiring companies to expand its AI tech. The company in February acquired SwiftKey, maker of a predictive keyboard app. And last month it bought Genee, an AI-based scheduling service.