Tag: ai

UK sorts out insurance for self-driving cars

The UK plans to introduce new insurance rules to ensure victims of accidents involving self-driving cars are compensated quickly.

The move will remove a major obstacle for the nascent industry. Self-driving car introduction has been hampered by legal hurdles in several countries as insurers and legislators try to establish who would ultimately be responsible in the event of an accident.

Transport Minister Chris Grayling said the public needed to be protected in the event of an incident, and that the framework to allow insurance for these new technologies would be published this week.

A single insurance product will be available to cover a driver when a vehicle is being used conventionally, as well as when the car is being used in autopilot mode, the transport ministry said in a statement.

The Blighty government wants to encourage the development and testing of autonomous driving technology to build an industry to serve a market it reckons could be worth about $1.1 trillion worldwide by 2025.

Japanese carmaker Nissan is due to test autonomous cars in London later this month after initial tests on public roads in the southern English town of Milton Keynes late last year.

The UK will also set out plans to improve infrastructure such as charging points for electric vehicles, the fastest growing sector for new car sales in the country and key to meeting environmental targets.

Oculus ordered to pay up on ZeniMax tech

A US jury in Texas ordered Oculus and other defendants to pay a combined $500 million to ZeniMax Media, a video game publisher that claims Oculus stole its technology.

The jury found that in 2014, Oculus used ZeniMax’s computer code to launch the Rift virtual-reality headset. ZeniMax alleges that video game designer John Carmack developed core parts of the Rift’s technology while working at a ZeniMax subsidiary. Oculus hired Carmack in 2013.

ZeniMax Chief Executive Robert Altman hailed the verdict and said in a statement the company was considering seeking an order blocking Oculus and Facebook from using its code. It is unclear what impact that would have on the Rift’s market availability.

However, the jury ruled that none of the defendants misappropriated ZeniMax’s trade secrets, though it did find that Oculus’ use of computer code directly infringed ZeniMax’s copyright. The jurors also held Carmack and Oculus co-founders Palmer Luckey and Brendan Iribe liable for various forms of infringement.

The jury also found Oculus liable for breaching a non-disclosure agreement Luckey signed with ZeniMax in 2012, when he began corresponding about virtual reality with Carmack.

Carmack worked for id Software before that company was acquired by ZeniMax. He is now the chief technology officer at Oculus.

Facebook Chief Executive Mark Zuckerberg testified last month during the three-week trial that none of ZeniMax’s proprietary code was incorporated into the Rift.

In a statement, Oculus spokeswoman Emily Bauer noted the jury’s finding that there had been no theft of trade secrets and said the company would appeal. “We’re obviously disappointed by a few other aspects of today’s verdict, but we are undeterred,” she said. “Oculus products are built with Oculus technology.”

LinkedIn and eBay millionaires invest to save us from AI

Two millionaires have each committed $10 million to save the world from the troubles of AI.

Reid Hoffman, the co-founder of LinkedIn, and eBay founder Pierre Omidyar’s non-profit are spending a fortune funding academic research and development aimed at keeping artificial intelligence systems ethical.

The fund received an additional $5 million from the Knight Foundation and two further $1 million donations from the William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group. The $27 million fund is anchored by MIT’s Media Lab and Harvard’s Berkman Klein Center for Internet and Society.

While the project acknowledges that AI has its uses, its backers are worried that things can go wrong. The most critical challenge is making sure that machines are not trained to perpetuate and amplify the same human biases that plague society.

The money will pay for research into how socially responsible artificially intelligent systems can be designed, for instance keeping computer programs that make decisions in fields like education, transportation and criminal justice accountable and fair.

The group wants to talk with the public about, and foster understanding of, the complexities of artificial intelligence. The two universities will form a governing body along with Hoffman and the Omidyar Network to distribute the funds.

Nearly half of our current jobs will be gone in 25 years

The Wharton School of Business at the University of Pennsylvania has warned that all the developed nations on earth will see job loss rates of up to 47 per cent within the next 25 years.

The statistic is based on a recent Oxford University study and includes blue and white collar jobs. So far, the losses have been restricted to the blue collar variety, particularly in manufacturing, and since that has been happening since the 1960s no one has cared that much.

The new trend is not creating new jobs either. By 2034, less than two decades away, mid-level jobs will be by and large obsolete.

So far the benefits have only gone to the ultra-wealthy, the top 1 per cent. This coming technological revolution is set to wipe out what looks to be the entire middle class.

Accountants, doctors, lawyers, teachers, bureaucrats, and financial analysts beware: your jobs are not safe. Soon computers will analyze and compare reams of data to make financial or medical decisions. There will be less chance of fraud or misdiagnosis, and the process will be more efficient. Not only are these folks in trouble, the trend is also likely to freeze salaries for those who remain employed, while income gaps only increase in size.

Unfortunately, the report suggests that it is too late to turn Luddite and break up the machines. Governments will need to sort out some form of retraining, although it is not clear what the nasty fleshy pink lumps can do that robots can’t.

AMD releases AI based Radeons with basic instinct

AMD is announcing a new series of Radeon-branded products today, targeted at machine intelligence and deep learning enterprise applications.

Dubbed Radeon Instinct, the line is a GPU-based solution for deep learning inference and training. AMD has also issued a new free, open-source library and framework for GPU accelerators, dubbed MIOpen.

MIOpen is made for high-performance machine intelligence applications and is optimized for deep learning frameworks in AMD’s ROCm software suite.

The first products are the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and can manage 5.7 TFLOPS of peak FP16 performance when the wind is behind it and it is going downhill.

The line-up also includes the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High-Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. AMD claims the MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W.
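
For a rough sense of where those peak numbers come from, here is a back-of-the-envelope sketch. The stream processor counts and clock speeds are assumptions based on the Polaris and Fiji parts the cards appear to be derived from, not figures from AMD’s announcement.

```python
# Rough peak-FLOPS arithmetic for the quoted figures (assumed specs, not
# taken from AMD's announcement). Peak FLOPS = shader count x 2 ops per
# clock (fused multiply-add) x clock speed. Polaris and Fiji run FP16 at
# the same rate as FP32, so the FP16 and FP32 peaks are identical here.

def peak_tflops(stream_processors: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Return theoretical peak TFLOPS for a GPU."""
    return stream_processors * ops_per_clock * clock_ghz / 1000.0

# Assumed MI6 (Polaris-like): 2304 stream processors at ~1.24 GHz
print(f"MI6  ~{peak_tflops(2304, 1.237):.1f} TFLOPS")   # ~5.7 TFLOPS
# Assumed MI8 (Fiji-like): 4096 stream processors at ~1.0 GHz
print(f"MI8  ~{peak_tflops(4096, 1.0):.1f} TFLOPS")     # ~8.2 TFLOPS
```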

The Radeon Instinct MI25 accelerator uses AMD’s next-generation Vega GPU architecture and has a board power of approximately 300W. All the Radeon Instinct accelerators are passively cooled but when installed into a server chassis you can bet there will be plenty of air flow.

Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All the Radeon Instinct cards will also support AMD MultiGPU (MxGPU) hardware virtualisation.

Microsoft’s Chinese AI is clever enough to censor itself

Software King of the World has admitted that its Chinese flavoured AI chat bot will not talk about anything that the authorities behind the bamboo curtain don’t want it to talk about.

Xiaoice would not directly respond to questions surrounding topics deemed sensitive by the Chinese state, including the Tiananmen Square massacre of 1989 and “Steamed Bun Xi”, a nickname for Chinese President Xi Jinping.

“Am I stupid? Once I answer you’d take a screengrab,” read one answer to a question that contained the words “topple the Communist Party.”

Mentioning Donald “Prince of Orange” Trump also drew an evasive response from the chat bot. “I don’t want to talk about it,” Xiaoice said. Fair enough, who does?

Microsoft has admitted that there was some filtering around Xiaoice’s interaction.

“We are committed to creating the best experience for everyone chatting with Xiaoice,” a Microsoft spokesperson said. “With this in mind, we have implemented filtering on a range of topics.” The tech giant did not elaborate on which specific topics the filtering applied to.
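
Microsoft has not explained how the filtering works, but the simplest form of such a system is a keyword screen that returns a canned deflection before the bot generates a reply. A minimal, entirely hypothetical sketch:

```python
# Hypothetical sketch of topic filtering in front of a chat bot.
# The blocked terms and the deflection are illustrative only; Microsoft
# has not disclosed how Xiaoice's filtering actually works.

BLOCKED_TOPICS = {"tiananmen", "steamed bun xi", "topple the communist party"}
DEFLECTION = "I don't want to talk about it."

def respond(message: str, generate_reply) -> str:
    """Return a canned deflection if the message touches a blocked topic,
    otherwise pass it through to the normal reply generator."""
    lowered = message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return DEFLECTION
    return generate_reply(message)

# Example: any reply generator can be plugged in
print(respond("Tell me about Tiananmen", lambda m: "..."))  # deflected
```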

Microsoft says that Xiaoice engages in conversations with over 40 million Chinese users on social media platforms like Weibo and WeChat.

LingLong creates DingDong in smart home industry

Chinese outfit LingLong has created an AI-based assistant it has dubbed the DingDong, which is making a sing song in the consumer electronics market.

The gear has a music library of three million songs, can take memos, and shares updates regarding news, traffic and weather in what the firm calls ‘cinema-like sound quality’.

It speaks Cantonese and Mandarin, which means it can roll into the lucrative Chinese market and get a head start on its Western rivals.

It costs $118 and answers questions, gives directions and plays music in high-quality 320Kbps format.

The device comes in four colours: red for prosperity, white for purity, black for money and purple because it is pretty.

In the west, Amazon is the leader in this space. It released the Echo, a smart speaker powered by Alexa, in 2014. Users can ask Alexa to do a range of things, such as request an Uber or order their usual from Domino’s, and there are more than three million units in the world.

Most DingDong owners use the technology as a music player, or as someone to talk to.

Samsung to release its own artificial intelligence helper

Samsung is to launch an artificial intelligence digital assistant service for its upcoming Galaxy S8 smartphone.

It had been expected. Samsung recently bought Viv Labs, a firm run by a co-creator of Apple’s Siri voice assistant. Samsung plans to integrate the outfit’s AI platform, called Viv, into the Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.

Samsung wants its Galaxy S8 to help revive smartphone momentum after scrapping the fire-prone Galaxy Note 7. Investors and analysts say the Galaxy S8 must be a strong device for Samsung to win back customers and revive earnings momentum.

Samsung did not comment on what types of services would be offered through the AI assistant that will be launched on the Galaxy S8, which is expected to go on sale early next year. It said the AI assistant would allow customers to easily use third-party services.

Samsung Executive Vice President Rhee Injong said that developers can attach and upload services to Samsung’s AI.

“Even if Samsung doesn’t do anything on its own, the more services that get attached the smarter this agent will get, learn more new services and provide them to end-users with ease,” he said.
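
Samsung has not published an API, but Rhee’s description of services being “attached” to the assistant suggests a plugin-style registry in which each third-party service claims the requests it can handle. A speculative sketch of that pattern, with invented names throughout:

```python
# Speculative sketch of a plugin-style service registry, the pattern Rhee's
# comments hint at. All names here are invented for illustration; Samsung
# has not published any such API.

from typing import Callable, Dict

class Assistant:
    def __init__(self):
        self._services: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, handler: Callable[[str], str]) -> None:
        """Third-party developers attach a handler for an intent."""
        self._services[intent] = handler

    def handle(self, intent: str, utterance: str) -> str:
        """Route a recognised intent to whichever service claimed it."""
        handler = self._services.get(intent)
        if handler is None:
            return "Sorry, I don't know how to do that yet."
        return handler(utterance)

assistant = Assistant()
assistant.register("order_pizza", lambda text: "Ordering your usual pizza.")
print(assistant.handle("order_pizza", "get me dinner"))
```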

Google is widely considered to be the leader in AI, but Amazon, Microsoft and Apple have offerings which include voice-powered digital assistants.

Google’s deep learning computer can keep things secret

Google Brain, Google’s deep learning project, has started protecting information from prying eyes.

Researchers Martín Abadi and David Andersen found that computers could make their own form of encryption using machine learning, without being taught specific cryptographic algorithms.

OK, the encryption was basic, but neural nets “are generally not meant to be great at cryptography”.

The Google Brain team created three neural nets called Alice, Bob and Eve. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.

To make sure the message remained secret, Alice had to convert her original plain-text message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. The resulting “cipher text” had to be decipherable by Bob, but nobody else. Both Alice and Bob started with a pre-agreed set of numbers called a key, which Eve didn’t have access to, to help encrypt and decrypt the message.

Initially they were rubbish at it, but Alice slowly developed her own encryption strategy, and Bob worked out how to decrypt it.

After 15,000 training rounds, Bob could convert Alice’s cipher text message back into plain text, while Eve could guess just 8 of the 16 bits forming the message. As each bit was just a 1 or a 0, that is the same success rate you would expect from pure chance. The research is published on arXiv.
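
The arXiv paper (Abadi and Andersen) frames this as an adversarial training objective: Eve is trained to reconstruct the plaintext from the ciphertext alone, while Alice and Bob are trained so that Bob recovers the message and Eve does no better than chance. A simplified sketch of those loss terms, assuming message bits encoded as -1/+1 as in the paper:

```python
# Simplified sketch of the adversarial objectives from Abadi & Andersen's
# paper, with bits encoded as -1/+1. This only shows the loss terms, not
# the convolutional networks or the actual training loop.

import numpy as np

def reconstruction_error(plaintext: np.ndarray, guess: np.ndarray) -> float:
    """Scaled L1 distance between the true bits and a network's guess
    (0 = perfect recovery, 1 = every bit flipped)."""
    return float(np.mean(np.abs(plaintext - guess))) / 2.0

def eve_loss(plaintext, eve_guess):
    # Eve simply tries to recover the plaintext from the ciphertext.
    return reconstruction_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess, n_bits=16):
    # Bob must recover the message, while Eve should be pushed towards
    # chance (about half the bits right), not towards total failure.
    bob_err = reconstruction_error(plaintext, bob_guess)
    eve_err = reconstruction_error(plaintext, eve_guess)
    chance_term = ((n_bits / 2 - eve_err * n_bits) ** 2) / (n_bits / 2) ** 2
    return bob_err + chance_term

plain = np.random.choice([-1.0, 1.0], size=16)
bob_guess = plain.copy()                          # Bob recovers everything
eve_guess = np.random.choice([-1.0, 1.0], 16)     # Eve guesses at random
print(eve_loss(plain, eve_guess))                 # roughly 0.5 (chance)
print(alice_bob_loss(plain, bob_guess, eve_guess))  # near 0 when Eve is at chance
```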

Practical implications for the technology are limited, but it does mean that computers can develop and hide secrets from their human masters.

Wall Street turns to AI

The cocaine nose jobs of Wall Street have reached the conclusion that they cannot trust humans to police their own dirty deals and are turning to AI instead.

Two exchange operators have announced plans to launch artificial intelligence tools for market surveillance in the coming months, and officials at a Wall Street regulator say they are about to do the same thing.

The software could snuffle around chat-room messages to detect dubious bragging about a big trade. It could also be used to unravel complex schemes like “layering”, where orders are rapidly sent to exchanges and then canceled to move a stock price.
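
To give a flavour of what the surveillance software has to do, the crudest version of a layering check simply flags traders who place a burst of orders and cancel nearly all of them within a short window. A toy sketch with made-up thresholds:

```python
# Toy sketch of a layering flag: a trader who places a burst of orders and
# cancels nearly all of them within a short window gets flagged for review.
# Thresholds and field names are invented for illustration; real surveillance
# systems are far more sophisticated than this.

from collections import defaultdict

def flag_possible_layering(events, window_secs=5, min_orders=20, cancel_ratio=0.9):
    """events: iterable of (timestamp, trader_id, action) where action is
    'new' or 'cancel'. Returns trader ids whose recent activity looks like
    rapid place-and-cancel behaviour."""
    history = defaultdict(list)          # trader_id -> [(timestamp, action)]
    flagged = set()
    for ts, trader, action in sorted(events):
        hist = history[trader]
        hist.append((ts, action))
        # keep only events inside the sliding window
        history[trader] = hist = [(t, a) for t, a in hist if ts - t <= window_secs]
        orders = sum(1 for _, a in hist if a == "new")
        cancels = sum(1 for _, a in hist if a == "cancel")
        if orders >= min_orders and cancels >= cancel_ratio * orders:
            flagged.add(trader)
    return flagged

# 25 orders placed and cancelled within a couple of seconds
burst = [(i / 10, "trader_a", "new") for i in range(25)] + \
        [(i / 10 + 0.05, "trader_a", "cancel") for i in range(25)]
print(flag_possible_layering(burst))  # {'trader_a'}
```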

Tom Gira, executive vice president for market regulation at the Financial Industry Regulatory Authority (FINRA), said that the software could track down something dodgy which no one had thought of before.

FINRA plans to test the AI software next year, while Nasdaq and the London Stock Exchange Group expect to use it by the end of the year. The exchange operators also plan to sell the technology to banks and fund managers, so that they can monitor their traders.

Market surveillance already relies on algorithms to detect patterns in trading data that may signal manipulation and prompt staff to investigate. But the problem is that the high volume of data can lead to an overwhelming number of alerts, most of which are false alarms.

The “machine learning” software FINRA is developing will be able to look beyond those set patterns and understand which situations truly warrant red flags, said Gira.