
Silicon Valley’s top brains try to sort out the singularity

Some of Silicon Valley’s top brains are trying to work out how to stuff their grey matter into the machines they build.

Bryan Johnson, the founder of Braintree online payments, and Elon Musk have both been trying to work out how to store their brains on their PCs to obtain a form of immortality.

According to MIT Technology Review, Johnson is effectively jumping on an opportunity created by the Brain Initiative, an Obama-era project which ploughed money into new schemes for recording neurons.

That influx of cash has spurred the formation of several other startups, including Paradromics and Cortera, also developing novel hardware for collecting brain signals. As part of the government brain project, the defense R&D agency DARPA says it is close to announcing $60 million in contracts under a program to create a “high-fidelity” brain interface able to simultaneously record from one million neurons – the current record is about 200 – and stimulate 100,000 at a time.

Several tech sector luminaries are looking for technology that might fuse human and artificial intelligence. In addition to Johnson, Elon Musk has been teasing a project called “neural lace,” which he said at a 2016 conference will lead to “symbiosis with machines”.

And Mark Zuckerberg declared in a 2015 Q&A that people will one day be able to share “full sensory and emotional experiences,” not just photos. Facebook has been hiring neuroscientists for an undisclosed project at Building 8, its secretive hardware division.

However, Elon Musk has also been moaning that the current speeds for transferring signals from brains are “ridiculously slow”.

 

MIT boffins create 3D without need for glasses

MIT boffins, fed up with having to watch movies with 3-D glasses perched over the top of their ordinary glasses, have invented a 3-D experience that does not need them.

MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and Israel’s Weizmann Institute of Science have demonstrated a display that lets audiences watch 3-D films in a movie theatre without extra eyewear.

Dubbed “Cinema 3D,” the prototype uses a special array of lenses and mirrors to enable viewers to watch a 3-D movie from any seat in a theater.

While the researchers warn that the system isn’t market-ready, they are optimistic that future versions could push the technology to a place where theatres would be able to offer glasses-free alternatives for 3-D movies.

Glasses-free 3-D already exists, but not in a way that scales to movie theatres. Traditional methods for TV sets use a series of slits in front of the screen (a “parallax barrier”) that allows each eye to see a different set of pixels, creating a simulated sense of depth.

But a parallax barrier has to sit at a set distance from the viewer, which does not work in big theatres where people watch from a wide range of distances and angles.
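
The geometry behind that limitation is easy to sketch. Given a pixel pitch, an assumed eye separation and one chosen viewing distance, similar triangles fix where the barrier has to sit; the snippet below uses the simplified textbook formulas and made-up numbers, not the Cinema 3D optics.

```python
# Textbook parallax-barrier geometry (a simplified approximation, not the
# Cinema 3D optics). All distances are in millimetres and the numbers are made up.

def barrier_design(pixel_pitch, eye_separation, viewing_distance):
    """Return (screen-to-barrier gap, slit pitch) for a single viewing distance."""
    # Similar triangles: rays from the two eyes through the same slit must land
    # exactly one pixel apart on the screen behind it.
    gap = pixel_pitch * viewing_distance / (eye_separation + pixel_pitch)
    slit_pitch = 2 * pixel_pitch * eye_separation / (eye_separation + pixel_pitch)
    return gap, slit_pitch

# A living-room viewer at 2 m and a back-row cinema-goer at 20 m need the barrier
# at very different distances from the screen, which is why one fixed barrier
# cannot serve a whole theatre.
for distance in (2_000, 20_000):
    gap, pitch = barrier_design(pixel_pitch=0.5, eye_separation=65, viewing_distance=distance)
    print(f"viewer at {distance} mm: barrier gap {gap:.1f} mm, slit pitch {pitch:.3f} mm")
```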

Cinema 3D encodes multiple parallax barriers into one display, such that each viewer sees a parallax barrier tailored to their position. That range of views is then replicated across the theater by a series of mirrors and lenses within Cinema 3D’s special optics system.

Cinema 3D’s prototype requires 50 sets of mirrors and lenses, and yet is just barely larger than a pad of paper. But, in theory, the technology could work in any context in which 3-D visuals would be shown to multiple people at the same time, such as billboards or storefront advertisements.

 

MIT speeds up browser loads with travelling salesperson

Researchers at MIT have worked out a way to download webpages 34 percent faster.

As websites become more complex, they take longer to load, so MIT has been working on a new method which allows browsers to gather a page’s files more efficiently.

Ravi Netravali, one of the researchers, said in a press release that the bottleneck is the multiple network round trips a page requires, each of which adds delay.

The new approach, called Polaris, minimises the number of round trips so that a page’s load time can be cut substantially.

It was developed at the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and logs all the dependencies and inter-dependencies on a web page, compiling them into a graph that a browser can use to download page elements more efficiently. The researchers liken it to the work of a travelling salesperson.

When you visit one city, you sometimes discover more cities you have to visit before going home. If someone gave you the entire list of cities ahead of time, you could plan the fastest possible route. Without the list, though, you have to discover new cities as you go, which results in unnecessary zig-zagging between far-away cities, they said.

For a web browser, loading all of a page’s objects is like visiting all of the cities. Polaris effectively gives you a list of all the cities before your trip actually begins.
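
As a rough illustration of the idea, and not the actual Polaris code (which is JavaScript served alongside the page), a dependency graph can be turned into a fetch schedule with a standard topological sort. The page objects and dependencies below are hypothetical.

```python
# Toy sketch of dependency-graph-driven fetching (illustrative only; the real
# Polaris is JavaScript that ships with the page and runs in the browser).
from collections import deque

def fetch_order(dependency_graph):
    """dependency_graph maps each object URL to the objects it depends on.
    Returns a fetch order in which every object comes after its dependencies."""
    remaining = {obj: len(deps) for obj, deps in dependency_graph.items()}
    dependents = {obj: [] for obj in dependency_graph}
    for obj, deps in dependency_graph.items():
        for dep in deps:
            dependents[dep].append(obj)

    ready = deque(obj for obj, count in remaining.items() if count == 0)
    order = []
    while ready:
        obj = ready.popleft()
        order.append(obj)              # in a real browser: start this download now
        for dependent in dependents[obj]:
            remaining[dependent] -= 1
            if remaining[dependent] == 0:
                ready.append(dependent)
    return order

# Hypothetical page: the script rewrites the DOM, which in turn pulls in an image.
page = {
    "index.html": [],
    "app.js": ["index.html"],
    "style.css": ["index.html"],
    "hero.jpg": ["app.js"],
}
print(fetch_order(page))  # ['index.html', 'app.js', 'style.css', 'hero.jpg']
```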

The team tested the system on 200 different websites, including ESPN, Weather.com, and Wikipedia. On average, it was able to load web pages 34 percent faster than a standard browser. The work will be presented later this week at the USENIX Symposium on Networked Systems Design and Implementation.

Polaris is written in JavaScript, which means it can be introduced to any website and used with unmodified browsers; it just has to be running on the server in question, so it automatically kicks in for any page load.
In the long term it could be integrated into browsers themselves, where it could “enable additional optimizations that can further accelerate page loads”.

Quantum computer could kill encryption

MIT has created the first five-atom quantum computer with the potential to crack the security of traditional encryption schemes – if the cat can be bothered getting out of the box and chasing the numbers.

Quantum computing relies on atomic-scale units, or “qubits,” that can be simultaneously 0 and 1. It typically takes about 12 qubits to factor the number 15, but researchers at MIT and the University of Innsbruck in Austria have found a way to pare that down to five qubits, each represented by a single atom.

By using laser pulses to keep the quantum system stable while holding the atoms in an ion trap, the new system promises scalability too, as more atoms and lasers can be added to build a bigger and faster quantum computer able to factor much larger numbers.

However, that threatens factorisation-based encryption schemes such as RSA, used for protecting credit cards, state secrets and other confidential data, because such schemes can be broken by Shor’s algorithm, the most complex quantum algorithm known to date. Fifteen is the smallest number that can meaningfully demonstrate Shor’s algorithm, and the new computer returned the correct factors with a confidence better than 99 percent.
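
For the curious, the quantum hardware only has to do the period-finding step; the rest of Shor’s algorithm is ordinary arithmetic. The sketch below fakes the period finding with brute force, so it is purely an illustration of why knowing the period of 7 modulo 15 hands you the factors 3 and 5, not a model of the MIT machine.

```python
# Classical skeleton of Shor's algorithm for N = 15 (the period-finding step is
# done by brute force here; on a quantum computer that step is the hard part).
from math import gcd

def shor_classical(N, a):
    assert gcd(a, N) == 1, "a must be coprime to N"
    # Find the period r of a^x mod N (quantum period finding in the real algorithm).
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    if r % 2 == 1:
        return None  # odd period: pick another a
    candidate = pow(a, r // 2, N)
    if candidate == N - 1:
        return None  # trivial case: pick another a
    return gcd(candidate - 1, N), gcd(candidate + 1, N)

print(shor_classical(15, 7))  # (3, 5): 7 has period 4 mod 15, and 7**2 = 49 = 4 mod 15
```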

Isaac Chuang, professor of physics and professor of electrical engineering and computer science at MIT, said that it is now possible to go into a lab and start building a computer that can crack such encryption.

“It might still cost an enormous amount of money to build—you won’t be building a quantum computer and putting it on your desktop anytime soon—but now it’s much more an engineering effort, and not a basic physics question,” Chuang added.

At the moment the MIT machine is not scalable enough, but that is an engineering problem rather than a physics one. Once the apparatus can trap more atoms and more laser beams can control the pulses, there is no physical reason why much bigger machines are not on the cards.

Already, though, that means nation states probably do not want to publicly store secrets using encryption that relies on factoring as a hard-to-invert problem. As quantum computers start coming out, those old secrets will be decrypted, Chuang said.

MIT comes up with deep learning for mobile

MIT researchers have emerged from their smoke filled labs with a new chip which can give mobile gear deep-learning capabilities.

At the International Solid State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to run mobile neural networks. The chip is 10 times as efficient as a mobile GPU and means mobile devices could run powerful AI algorithms locally.

Vivienne Sze, an assistant professor of electrical engineering at MIT whose group developed the new chip, said that deep learning is useful for many mobile applications, including object recognition, speech recognition and face detection.

“Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

Dubbed Eyeriss, the new chip could be useful for the Internet of Stuff. AI-armed networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

Sze and her colleagues used a chip with 168 cores, roughly as many as a mobile GPU has.

 

Eyeriss minimises the frequency with which cores need to exchange data with distant memory banks, an operation that consumes time and energy. Whereas the cores of a mobile GPU share a single, large memory bank, each Eyeriss core has its own memory, and the chip has a circuit that compresses data before sending it to individual cores.
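
The article does not spell out the compression format, but a scheme along the lines of run-length coding pays off when neural-network data is mostly zeros; the sketch below is only an illustration of that general idea, with invented data, not Eyeriss’s actual on-chip circuit.

```python
# Minimal run-length compression sketch for mostly-zero activation data.
# Illustrative only: the real Eyeriss codec is a hardware circuit and its exact
# format is not described in this article.

def compress(values):
    """Encode as (zero_run_length, nonzero_value) pairs."""
    encoded, zeros = [], 0
    for v in values:
        if v == 0:
            zeros += 1
        else:
            encoded.append((zeros, v))
            zeros = 0
    if zeros:
        encoded.append((zeros, None))  # trailing zeros with no value after them
    return encoded

def decompress(encoded):
    values = []
    for zeros, v in encoded:
        values.extend([0] * zeros)
        if v is not None:
            values.append(v)
    return values

activations = [0, 0, 0, 5, 0, 0, 7, 0, 0, 0, 0, 2, 0, 0]
packed = compress(activations)
assert decompress(packed) == activations
print(len(packed), "pairs instead of", len(activations), "values")  # 4 pairs instead of 14 values
```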

Each core can communicate directly with its immediate neighbours, so that if they need to share data, they don’t have to route it through main memory.

The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.

At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip.

Don’t worry about your Dell blowing up

MIT researchers have emerged from their smoke filled labs with a new material for a basic battery component that they say will enable almost indefinite power storage.

A solid electrolyte could not only increase battery life, but also storage capacity and safety, as liquid electrolytes are the leading cause of battery fires.

Lithium ion batteries use a liquid electrolyte which is an organic solvent that has been responsible for overheating and fires in cars, commercial airliners and mobiles.

With a solid electrolyte you could throw the battery against the wall or drive a nail through it; there is nothing there to burn.

Gerbrand Ceder, a professor of materials science and engineering at MIT, said that there is virtually no degradation, meaning such batteries could last through hundreds of thousands of charges.

The researchers, who published their findings in the peer-reviewed journal Nature Materials, described the solid-state electrolytes as an improvement over today’s lithium-ion batteries.

A battery’s electrolyte separates the positive cathode from the negative anode while allowing ions to flow between the two terminals; the chemical reaction between the terminals produces an electric current.

The idea of solid electrolytes has been around for a while. The problem was that they could not conduct ions fast enough to be efficient energy producers.

The MIT team says it overcame that problem which is why its boffins have nice creases on their white lab coats.

Another selling point of the solid-state lithium-ion battery is that it can perform under frigid temperatures, which is more than can be said for most scientists.

MIT slashes battery cost by half

MIT boffins have emerged from their smoke filled labs with an advanced manufacturing approach for lithium-ion batteries which promises to slash their cost while also improving their performance and making them easier to recycle.

The method will be marketed by a spinoff company called 24M which claims to have re-invented the process of making lithium-ion batteries.

Yet-Ming Chiang, the Kyocera Professor of Ceramics at MIT and a co-founder of 24M, said that the existing process has hardly changed in the two decades since the technology was invented, and is inefficient, with more steps and components than are really needed.

The new process is based on a concept developed five years ago by Chiang and colleagues including W. Craig Carter, the POSCO Professor of Materials Science and Engineering. In this so-called “flow battery,” the electrodes are suspensions of tiny particles carried by a liquid and pumped through various compartments of the battery.

The new battery design is a hybrid between flow batteries and conventional solid ones: In this version, while the electrode material does not flow, it is composed of a similar semisolid, colloidal suspension of particles. Chiang and Carter refer to this as a “semisolid battery.”

Chiang said that this approach greatly simplifies manufacturing, and also makes batteries that are flexible and resistant to damage.

“We realized that a better way to make use of this flowable electrode technology was to reinvent the manufacturing process,” Chiang said.

Instead of the standard method of applying liquid coatings to a roll of backing material, and then having to wait for that material to dry before it can move to the next manufacturing step, the new process keeps the electrode material in a liquid state and requires no drying stage at all. Using fewer, thicker electrodes, the system reduces the conventional battery architecture’s number of distinct layers, as well as the amount of nonfunctional material in the structure, by 80 percent.

Having the electrode in the form of tiny suspended particles instead of consolidated slabs greatly reduces how convoluted the path is for charged particles moving through the material, a property known as “tortuosity”. A less tortuous path makes it possible to use thicker electrodes, which, in turn, simplifies production and lowers cost.
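
A standard porous-electrode relation shows why: the effective ion diffusivity scales with porosity over tortuosity, and the time for ions to soak through an electrode grows with the square of its thickness, so a less tortuous electrode can be made thicker without slowing the battery down. The relation and numbers below are a textbook approximation for illustration, not figures from the 24M work.

```python
# Generic porous-electrode scaling (a textbook relation, not 24M's own numbers):
# effective diffusivity D_eff = D * porosity / tortuosity, and the characteristic
# time for ions to cross an electrode of thickness L grows as L**2 / D_eff.

def diffusion_time(thickness, bulk_diffusivity, porosity, tortuosity):
    """Rough time (seconds) for ions to diffuse across the electrode thickness (metres)."""
    d_eff = bulk_diffusivity * porosity / tortuosity
    return thickness ** 2 / d_eff

D, POROSITY = 1e-10, 0.4   # hypothetical bulk diffusivity (m^2/s) and porosity
thin_tortuous = diffusion_time(100e-6, D, POROSITY, tortuosity=4.0)
thick_straight = diffusion_time(200e-6, D, POROSITY, tortuosity=1.0)
# Cutting tortuosity from 4 to 1 lets the electrode be twice as thick for the
# same soak-through time, which is the argument for fewer, thicker electrodes.
print(f"{thin_tortuous:.0f} s vs {thick_straight:.0f} s")  # both ~1000 s
```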

Basically this will cut battery costs by half, and create a battery that is more flexible and resilient. While conventional lithium-ion batteries are composed of brittle electrodes that can crack under stress, the new formulation produces battery cells that can be bent, folded or even penetrated by bullets without failing. This should improve both safety and durability, he says.

You use phones less when you’re out of work

A team of researchers from the Massachusetts Institute of Technology (MIT) has discovered a correlation between people’s phone habits and whether or not they are working.

Perhaps unsurprisingly, people who are out of work make more of their calls from home and make fewer calls overall than they did when they were working.

Looking at a European factory that was shut down, the scientists found, perhaps to no one’s great surprise, that the total number of calls made by people unlucky enough to be made redundant fell by 51 percent compared with people who still had jobs.

And it had a knock-on effect on mobile towers, with usage falling by 20 percent.

Also, perhaps to no-one’s surprise apart from the researchers at MIT, people without jobs contact fewer people every month and the people they do call are different.

Jameson Toole, at MIT, said: “People’s social behaviour diminishes and that might be one of the ways layoffs have these negative consequences. It hurts the networks that might help people find the next job.”

Hardware can defend your cloud

A group of researchers has started to implement security in silicon that can stop nosey parkers or criminals from working out what data is in your cloud.

Two years ago the MIT researchers proposed a method for stopping outsiders from learning anything by watching the way computers access memory banks.

The researchers said that they’ve already tested their methods on reconfigurable semiconductors and are moving into manufacturing these devices.

The chip improves security by ensuring that whenever data is fetched from a memory address, a number of other addresses are queried as well, so an observer cannot tell which one was actually wanted.

Although this adds overhead, because extra data is shuffled about, the MIT team keeps the cost down by storing memory addresses in a tree-like data structure, with every address randomly assigned to a path through the tree.

The chip they have designed avoids much of that performance overhead by adding an extra memory circuit, with storage slots mapped onto the nodes of any path through the tree.

It discards all redundant or decoy data.
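
What the article describes closely resembles the published path-ORAM technique. The toy below is a heavily simplified, software-only sketch of that general idea (no eviction logic, no real block storage), not the MIT silicon.

```python
# Heavily simplified, software-only sketch of the tree-based access pattern the
# researchers describe (path-ORAM style). The MIT design is a hardware circuit
# and does considerably more than this.
import random

class ToyObliviousMemory:
    def __init__(self, depth):
        self.num_leaves = 2 ** depth
        self.position = {}      # block id -> leaf whose path it currently lives on
        self.buckets = {}       # tree node id -> set of block ids stored there

    def _path(self, leaf):
        # Heap-style numbering: root is node 1, leaves are num_leaves..2*num_leaves-1.
        node, nodes = leaf + self.num_leaves, []
        while node >= 1:
            nodes.append(node)
            node //= 2
        return nodes

    def access(self, block):
        leaf = self.position.setdefault(block, random.randrange(self.num_leaves))
        # Read every bucket on the block's whole path: an observer watching memory
        # sees the same-shaped burst of reads whether or not the extra (decoy)
        # blocks were wanted; the decoys are simply discarded afterwards.
        for node in self._path(leaf):
            self.buckets.get(node, set()).discard(block)
        # Remap the block to a fresh random leaf so repeated accesses to the same
        # address never produce a repeating, recognisable pattern, and park it at
        # the root, which lies on every path.
        self.position[block] = random.randrange(self.num_leaves)
        self.buckets.setdefault(1, set()).add(block)

memory = ToyObliviousMemory(depth=4)
for address in ["secret", "other", "secret", "secret"]:
    memory.access(address)      # each access looks like a full root-to-leaf sweep
```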

The circuits the MIT scientists have designed can easily be added to existing semiconductor designs and switched on or off as needed, so software engineers could activate the protection only when an application needs it, while other applications could use it all the time.

Multicore chips need to be mini-internets

Li-Shiuan Peh, the Singapore Research Professor of Electrical Engineering and Computer Science at MIT, said that in the future massively multicore chips will need to resemble little Internets.

Peh told the International Symposium on Computer Architecture that each core will need an associated router, and data travels between cores in packets of fixed size.

This week Peh’s group unveiled a 36-core chip that features just such a “network-on-chip” to make her point.

This chip fixes the cache coherence problems that have stuffed up previous attempts to design networks-on-chip. Until now ensuring that cores’ locally stored copies of globally accessible data remain up to date has been a problem.

Most chip cores are connected by a bus and when two cores need to communicate, they’re granted exclusive access to the bus.

But that approach won’t work as the core count mounts: cores end up spending all their time waiting for the bus and, like real buses, when it finally shows up several requests arrive at the same time.

Bhavya Daya, an MIT graduate student in electrical engineering and computer science and first author on the new paper, said that in a network-on-chip each core is connected only to those immediately adjacent to it.

This means a core can reach its neighbours very quickly and there are multiple paths to any destination, so data going all the way across the chip is not forced down a single congested route.
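
To make “multiple paths” concrete, the sketch below shows generic mesh routing, not the routing logic of the MIT chip: dimension-order routing can go across first and then up, or the other way round, and an adaptive router could pick whichever route is less congested. The grid coordinates are purely illustrative.

```python
# Generic mesh network-on-chip routing sketch (illustrative; not the routing
# logic of the MIT 36-core chip). Cores sit on a grid and each talks only to its
# immediate neighbours, so a packet hops from core to core.

def xy_route(src, dst):
    """Dimension-order routing: finish all X hops, then all Y hops."""
    x, y = src
    dx, dy = dst
    hops = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

def yx_route(src, dst):
    """The alternative path: Y hops first, then X. An adaptive router could pick
    whichever of the two routes is less congested."""
    swapped = xy_route((src[1], src[0]), (dst[1], dst[0]))
    return [(x, y) for y, x in swapped]

print(xy_route((0, 0), (3, 2)))  # [(0,0), (1,0), (2,0), (3,0), (3,1), (3,2)]
print(yx_route((0, 0), (3, 2)))  # [(0,0), (0,1), (0,2), (1,2), (2,2), (3,2)]
```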

But the bus system makes it easier to maintain cache coherence. Every core on a chip has its own cache, a local, high-speed memory bank in which it stores frequently used data. As it performs computations, it updates the data in its cache, and every so often, it undertakes the relatively time-consuming chore of shipping the data back to main memory.

To handle the problem of another core needing the data before it has been shipped back, bus-based chips use a protocol called “snoopy”, so named because it involves snooping on other cores’ communications.

But in a network-on-chip, data is flying everywhere, and packets will frequently arrive at different cores in different sequences. The implicit ordering that the snoopy protocol relies on breaks down.

Daya, Peh, and their colleagues solve this problem by equipping their chips with a second, lightweight “shadow” network, over which each core broadcasts a declaration that it has issued a request for data.

Groups of declarations reach the routers associated with the cores at discrete intervals — intervals corresponding to the time it takes to pass from one end of the shadow network to another. Each router can thus tabulate exactly how many requests were issued during which interval, and by which other cores. The requests themselves may still take a while to arrive, but their recipients know that they’ve been issued.

After testing the prototype chips to ensure that they’re operational, Daya intends to load them with a version of the Linux operating system, modified to run on 36 cores, and evaluate the performance of real applications, to determine the accuracy of the group’s theoretical projections.

At that point, she plans to release the blueprints for the chip, written in the hardware description language Verilog, as open-source code.