SatNav makes you stupid

Using SatNav messes with the parts of your brain which help you navigate normally.

Boffins writing in the journal Nature Communications scanned the brains of 24 volunteers as they navigated a simulation of the streets of London’s Soho district.

The researchers from the University of London found that listening to a satellite navigation system’s instructions “switched off” activity in parts of the brain used for navigation.

A bit of the brain called the hippocampus, which is involved in both memory and spatial navigation, appears to encode two different maps of the environment. The first tracks the distance to the final destination as the crow flies and is encoded by the frontal region of the hippocampus, while the other tracks the “true path” to the goal and is encoded by its rear region.

During the navigation tasks, the hippocampus acts like a flexible guidance system, flipping between these two maps according to changing demands. Activity in the hippocampal rear region acts like a homing signal, increasing as the goal gets closer.

Analysis of the brain-scanning data revealed activity in the rear right of the hippocampus increased whenever the participants entered a new street while navigating. It also varied with the number of new path options available. The more alternatives there were, the greater the brain activity.

The researchers also found that activity in the front of the hippocampus was associated with a property called centrality, defined by the proximity of each new street to the centre of the network.

Activity could be seen in the participants’ prefrontal cortices when they were forced to make a detour and had to replan their route — and this increased in relation to the number of options available.

However, when participants followed SatNav instructions, brain activity in these regions “switched off” and the whole lot had a snooze.

Together, the new findings suggest the rear portion of the hippocampus reactivates spatial memories of possible navigation paths, with more available paths evoking more activity, and that the prefrontal cortex may contribute to path-planning by searching through different route options and selecting the best one.
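To make the two “maps” concrete, here is a minimal Python sketch using networkx on a made-up toy street grid – not the study’s actual analysis – that computes the crow-flies distance and the true path distance to a goal, counts the path options at the current junction, and scores each junction’s centrality (closeness centrality is used here as a stand-in for whatever centrality measure the study defined).

import math
import networkx as nx

# Hypothetical junction coordinates and street segments (not real Soho streets).
coords = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0), "goal": (2, 1)}
streets = nx.Graph()
for u, v in [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("C", "goal")]:
    streets.add_edge(u, v, length=math.dist(coords[u], coords[v]))

position = "A"
# "As the crow flies" distance: the quantity linked to the front of the hippocampus.
crow_flies = math.dist(coords[position], coords["goal"])
# True path distance through the street network: linked to its rear region.
true_path = nx.shortest_path_length(streets, position, "goal", weight="length")
# Number of path options at the current junction, which scaled with rear activity.
options = streets.degree(position)
# Centrality of each junction, the property linked to frontal activity.
centrality = nx.closeness_centrality(streets, distance="length")

print(f"crow flies: {crow_flies:.2f}  true path: {true_path:.2f}  options: {options}")
print({node: round(score, 2) for node, score in centrality.items()})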

Silicon Valley’s top brains try to sort out the singularity

Some of Silicon Valley’s top brains are trying to work out how to stuff their grey matter into the machines they build.

Bryan Johnson, the founder of Braintree online payments, and Elon Musk have both been trying to work out how to store their brains on their PCs to obtain a form of immortality.

According to MIT Technology Review, Johnson is effectively jumping on an opportunity created by the Brain Initiative, an Obama-era project which ploughed money into new schemes for recording neurons.

That influx of cash has spurred the formation of several other startups, including Paradromics and Cortera, also developing novel hardware for collecting brain signals. As part of the government brain project, the defense R&D agency DARPA says it is close to announcing $60 million in contracts under a program to create a “high-fidelity” brain interface able to simultaneously record from one million neurons – the current record is about 200 – and stimulate 100,000 at a time.

Several tech sector luminaries are looking for technology that might fuse human and artificial intelligence. In addition to Johnson, Elon Musk has been teasing a project called “neural lace,” which he said at a 2016 conference will lead to “symbiosis with machines”.

And Mark Zuckerberg declared in a 2015 Q&A that people will one day be able to share “full sensory and emotional experiences,” not just photos. Facebook has been hiring neuroscientists for an undisclosed project at Building 8, its secretive hardware division.

However, Elon Musk has also been moaning that the current speeds for transferring signals from brains are “ridiculously slow”.

 

A brain does not work like a computer chip

According to the BBC, a processor is the brain of a computer, but it seems that the hardware has neuroscientists baffled.

A paper published in PLOS Computational Biology wondered if more information is the same thing as more understanding. Eric Jonas of the University of California, Berkeley, and Konrad Kording of Northwestern University in Chicago, who both have backgrounds in neuroscience and electronic engineering, reasoned that a computer chip was a good way to test the analytical toolkit used by modern neuroscience. However, they had to admit that they were wrong.

They took an MOS Technology 6502 chip, which was first produced in 1975 and is famous for powering, among other things, early Atari, Apple and Commodore computers. It has 3,510 transistors and is simple enough that a simulation can model the electrical state of every transistor, and the voltage on every one of the thousands of wires connecting those transistors to each other, as the virtual chip runs a particular program.

The simulation produces about 1.5 gigabytes of data a second—a large amount, but well within the capabilities of the algorithms currently employed to probe the mysteries of biological brains.

But brain science and chip science started to diverge in the test. If you damage part of the brain, for example, you have a fair idea of what is going to be stuffed up. A chip, though, comes up with false positives.

Disabling one particular group of transistors prevented the chip from running the boot-up sequence of “Donkey Kong” but allowed it to run other games.

If it were a brain, you would conclude that those transistors were uniquely responsible for “Donkey Kong”, but in reality they are just part of a circuit which implements a much more basic computing function that is crucial for loading one piece of software, but not some others.

The boffins looked for correlations between the activity of groups of nerve cells and a particular behavior, a standard neuroscience approach. When they applied it to the chip, the researchers’ algorithms found five transistors whose activity was strongly correlated with the brightness of the most recently displayed pixel on the screen.

Jonas and Kording know that these transistors are not directly involved in drawing pictures on the screen; they are involved only in the trivial sense that they are used by some part of the program which ultimately decides what goes on the screen.
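As an illustration of that pitfall, here is a rough sketch – on made-up data, not the real 6502 simulation – of the kind of correlation screen described above: random transistor traces are generated, a handful are made to loosely track a “pixel brightness” signal, and a Pearson correlation duly picks them out, which says nothing about which transistors actually draw the picture.

import numpy as np

rng = np.random.default_rng(0)
n_transistors, n_samples = 3510, 2000       # 3,510 transistors, as in the real 6502

brightness = rng.random(n_samples)                    # most recently displayed pixel
traces = rng.random((n_transistors, n_samples))       # fake transistor activity traces
for t in (7, 42, 100, 555, 2048):                     # five arbitrary "suspects"
    traces[t] = 0.7 * brightness + 0.3 * rng.random(n_samples)

# Pearson correlation of every transistor trace with the brightness signal.
corr = np.array([np.corrcoef(trace, brightness)[0, 1] for trace in traces])
top_five = np.argsort(np.abs(corr))[-5:][::-1]
print("most correlated transistors:", top_five, corr[top_five].round(2))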

Jonas said that neuroscience techniques failed to find many chip structures that the researchers knew were there, and which are vital for comprehending what is actually going on in it.

In fact, all the neuroscientists’ algorithms could detect in the chip was the master clock signal, which co-ordinates the operations of different parts of the chip.

In short, computers and brains have got as much in common as a packet of crisps has with the Empire State Building. This means that the BBC will have to find a new simile.

Samsung to release its own artificial intelligence helper

Samsung is to launch an artificial intelligence digital assistant service for its upcoming Galaxy S8 smartphone.

It had been expected. Samsung recently bought Viv Labs, a firm run by a co-creator of Apple’s Siri voice assistant. Samsung plans to integrate the outfit’s AI platform, called Viv, into its Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.

Samsung wants its Galaxy S8 to help revive smartphone momentum after scrapping the fire-prone Galaxy Note 7. Investors and analysts say the Galaxy S8 must be a strong device for Samsung to win back customers and revive earnings momentum.

Samsung did not comment on what types of services would be offered through the AI assistant that will be launched on the Galaxy S8, which is expected to go on sale early next year. It said the AI assistant would allow customers to easily use third-party services.

Samsung Executive Vice President Rhee Injong said that developers can attach and upload services to Samsung’s AI.

“Even if Samsung doesn’t do anything on its own, the more services that get attached the smarter this agent will get, learn more new services and provide them to end-users with ease,” he said.

Google is widely considered to be the leader in AI, but Amazon, Microsoft and Apple have offerings which include voice-powered digital assistants.

Microsoft builds a new AI unit

Software King of the world, Microsoft has said that it has created a new artificial intelligence unit.

Vole is apparently diving head first into the artificial intelligence (AI) and machine learning research world, rather than dipping its toes in the water first.

Microsoft teamed up with four other big technology companies – Amazon, Google, Facebook and IBM – to create a non-profit organisation to advance public understanding of AI technologies.

Vole’s new unit — Microsoft AI and Research Group — will be headed by Harry Shum, a company veteran who has held senior roles at the Microsoft Research and Bing engineering divisions.

“Microsoft has been working in artificial intelligence since the beginning of Microsoft Research, and yet we’ve only begun to scratch the surface of what’s possible,” Shum said in a statement.

Chief Executive Satya Nadella has previously said the company’s $26.2 billion deal for LinkedIn Corp is expected to help bolster its efforts in analytics, machine learning and AI.

Microsoft has also been acquiring companies to expand its AI tech. The company in February acquired SwiftKey, maker of a predictive keyboard app. And last month it bought Genee, an AI-based scheduling service.

 

ABC suspends hack over “wi-fi cooked my brain” story

The Aussie ABC science program Catalyst is under review after its second major breach of editorial standards in several years, with the programme churning out another Facebook-style conspiracy story.

The Corporation’s independent Audience and Consumer Affairs unit has found a story on the safety of Wi-Fi was in breach of editorial policies on accuracy and impartiality.

The problem centres on a story Catalyst aired, “Wi-Fried”, about the safety of wireless devices such as mobile phones. Basically the item churned out the sort of conspiracy nonsense about wi-fi cooking your brain which you expect to see on Facebook, along with fantasies about chemtrails.

This is the second time Catalyst’s programming has dumbed itself down by ignoring science to push Facebook-style conspiracy theories. The Audience and Consumer Affairs Unit found a story aired in October 2013 on statins and heart disease was not up to standards of impartiality.

The person responsible for both programmes was Dr Maryanne Demasi. She has apparently been suspended from on-air reporting until the review of Catalyst is completed in September.

Dr Demasi is making no comment, but she did defend the broadcast in the Huffington Post (http://www.huffingtonpost.com.au/maryanne-demasi/sometimes-asking-questions-provides-you-with-answers-that-may-be-uncomfortable_b_9267642.html), claiming that sometimes you have to ask questions.

“Catalyst was accused of scaremongering. It’s an overused term. It’s routinely used in politics to dismiss opposition policies. Reporting on terrorist threats, the Zika virus and crime sprees could also be argued to cause anxiety among the general population. But it’s a price we’re all willing to pay for free and diverse speech,” she said.

 

Brainwaves are the new fingerprints

A team of boffins has worked out a way of telling who you are by reading your mind.

Researchers at Binghamton University in the US say their ‘brainprints’ are 100 percent accurate and might have a new life in ultra-secure systems.

They looked at the brain activity of 50 people wearing an electroencephalogram (EEG) headset who were asked to look at a series of 500 images designed specifically to elicit unique responses from person to person – for example a slice of pizza, a boat, or the word “conundrum”.

They found that participants’ brains reacted differently to each image, enough that a computer system was able to identify each volunteer’s ‘brainprint’ with 100 percent accuracy.

Assistant Professor Sarah Laszlo said that when you take hundreds of these images, where every person is going to feel differently about each individual one, then you can be really accurate in identifying which person it was who looked at them just by their brain activity.
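For a sense of how such identification can work in principle, here is a toy sketch – not the Binghamton team’s actual method – in which each person’s average responses to the 500 images form a template, and a fresh set of responses is matched to the nearest template. The data are random stand-ins for real EEG recordings, and the 32-feature response size is invented.

import numpy as np

rng = np.random.default_rng(1)
n_people, n_images, n_features = 50, 500, 32

# Each person's characteristic (noise-free) response to each image.
true_response = rng.normal(size=(n_people, n_images, n_features))

def record_session(person: int, noise: float = 0.5) -> np.ndarray:
    """Simulate one noisy EEG session for one person viewing all 500 images."""
    return true_response[person] + noise * rng.normal(size=(n_images, n_features))

# Enrolment: average a few sessions into a per-person template (the "brainprint").
templates = np.stack([
    np.mean([record_session(person) for _ in range(3)], axis=0)
    for person in range(n_people)
])

def identify(session: np.ndarray) -> int:
    """Return the enrolled person whose template is closest to this session."""
    distances = np.linalg.norm(templates - session, axis=(1, 2))
    return int(np.argmin(distances))

correct = sum(identify(record_session(p)) == p for p in range(n_people))
print(f"identified {correct}/{n_people} volunteers correctly")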

According to Laszlo, brain biometrics are appealing because they are cancellable and cannot be stolen by malicious means the way a finger or retina can.

“In the unlikely event that attackers were actually able to steal a brainprint from an authorised user, the authorised user could then ‘reset’ their brainprint,” Laszlo said.

Zhanpeng Jin, assistant professor at Binghamton University, does not see this as the kind of system that would be mass-produced for low security applications, but it could have important security applications.

“We tend to see the applications of this system as being more along the lines of high-security physical locations, like the Pentagon or Air Force Labs, where there aren’t that many users that are authorised to enter, and those users don’t need to constantly be authorising the way that a consumer might need to authorise into their phone or computer,” Jin said.

Graphene could create a computer brain

Flakes of graphene might be the key to building computer chips that can process information the way a human brain does – not your brain of course, or mine, but a better class of brain.

The technology is centred on neuromorphic chips which are made up of networks of transistors that interact the way human neurons do. This means that they can process analog input, such as visual information, quicker and more accurately than traditional chips.

Bhavin Shastri, a postdoctoral fellow in electrical engineering at Princeton University, said that one way of building such transistors is to construct them from lasers that rely on an encoding approach called “spiking.”

Depending on the input, the laser can provide a brief spike in its output of photons or not respond at all. Instead of using the on or off state of the transistor to represent the 1s and 0s of digital data, these neural transistors rely on the time intervals between spikes.

Shastri said: “We’re essentially using time as a way of encoding information. Computation is based on the spatial and temporal positions of the pulses. This is sort of the fundamental way neurons communicate with other neurons.”
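Here is a rough sketch of that encoding idea, with made-up numbers (a real photonic neuron would work on picosecond scales and far subtler dynamics): each value is carried by the gap between two spikes rather than by an on/off level.

from typing import List

UNIT = 1e-12        # pretend timing resolution: one picosecond
BASE_GAP = 5        # minimum gap between spikes, in units

def encode(values: List[int]) -> List[float]:
    """Turn a list of small integers into spike times, one value per inter-spike interval."""
    times, t = [0.0], 0.0
    for value in values:
        t += (BASE_GAP + value) * UNIT
        times.append(t)
    return times

def decode(times: List[float]) -> List[int]:
    """Recover the values by measuring the gaps between consecutive spikes."""
    return [round((b - a) / UNIT) - BASE_GAP for a, b in zip(times, times[1:])]

message = [3, 1, 4, 1, 5]
spikes = encode(message)
assert decode(spikes) == message
print(spikes)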

Shastri’s work with Lawrence Chen, a professor of electrical and computer engineering at McGill University, is trying to get the laser to spike at picosecond time scales – one trillionth of a second.

They managed to do this by putting a tiny piece of graphene inside a semiconductor laser. The graphene acts as a “saturable absorber,” soaking up photons and then emitting them in a quick burst.

Graphene is a good saturable absorber because it can take up and release a lot of photons extremely fast, and it works at any wavelength. It also stands up very well to all the energy produced inside a laser.

 

Boffins cure brain computer headaches

UC San Diego scientists have constructed a new kind of computer that stores information and processes it in the same place, solving some of the bottlenecks of conventional computing.

The “memcomputer” can solve a problem involving a large dataset more quickly than conventional computers, while using far less energy.

The machine is currently just a proof of concept, but could be improved into a general-purpose computer.

Researchers led by Massimiliano Di Ventra, a UCSD professor of physics, said that memcomputers could equal or surpass the potential of quantum computers but, because they don’t rely on exotic quantum effects, are far more easily constructed.

Di Ventra said that besides solving extremely complex problems involving huge amounts of data, memcomputers can potentially teach us more about how the brain operates. While the brain is often compared to a computer, the two are organized and operate very differently.

According to the journal Science Advances, which we get for the draw in the quantum dot puzzles, conventional computers store data in one location designated for memory and transfer it to processors located elsewhere to compute answers. But the human brain combines storage and processing in one place, treating these as one combined entity.

Memcomputers combine the storage and processing functions in a “collective state”; this complex signal actually contains the problem’s solution, which in theory can be easily extracted. The prototype demonstrates this can be done.

Studying this fault-tolerant property could teach us more about how brains work, and how they break down, Di Ventra said.

“From memcomputing we can learn for instance the ability of the network of interconnected memprocessors in bypassing broken connections, namely how robust is such a network to damage of its units while still able to compute specific tasks,” Di Ventra said. “This could possibly translate in our understanding of the maximum amount of damage to neurons done by degenerative diseases, like Alzheimer’s, before we lose specific functions.”

The study represents a significant advance in the field, said Yuriy V. Pershin, another researcher who has collaborated with Di Ventra and Traversa, but did not take part in this study.

There is still a long way to go: the prototype memcomputer is limited because it is analog, not digital. Analog computing is especially susceptible to interference from noise, which limits the ability to scale up the number of memprocessors in one computer.

Your brain can remember involved passwords

Researchers looking into people’s password habits have discovered that people can remember complex passwords if they are trained slowly.

Many people think they can’t remember secure passwords, but the boffins found that they can; they just never learned how to remember them correctly.

Joseph Bonneau, one of the two researchers who created the study, got a group of volunteers to log into a website 90 times over the span of ten days, using whatever password they chose.

After entering their password, the website showed the volunteers a short security code, made of either four random letters or two random words, and asked them to type it. Throughout the ten-day experiment, the site added more letters and words to the code—up to 12 random letters or six random words—and the security code would take just a little longer to be displayed, prompting the participants to remember it themselves before it appeared.
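A hypothetical sketch of that schedule follows; the code lengths and session count come from the article, while the starting points and reveal delays are invented, and none of this is the researchers’ code. The code grows from four to 12 letters (or two to six words) and the delay before it is shown creeps up, nudging the participant to recall it from memory first.

import random
import string

rng = random.Random(0)
FULL_LETTERS = "".join(rng.choices(string.ascii_lowercase, k=12))   # the full 12-letter code
FULL_WORDS = rng.sample(["fruit", "bat", "klingon", "yeats", "snow", "trousers"], 6)

def code_for_session(session: int, total: int = 90, use_words: bool = False) -> str:
    """The security code shown in a given session: a growing prefix of the full code,
    from 4 to 12 letters (or 2 to 6 words) over the course of the experiment."""
    progress = session / total
    if use_words:
        return " ".join(FULL_WORDS[: 2 + round(progress * 4)])
    return FULL_LETTERS[: 4 + round(progress * 8)]

def display_delay(session: int, total: int = 90) -> float:
    """Seconds the site waits before showing the code, creeping up every session
    so the participant tries to recall it before it appears."""
    return 0.3 + 3.0 * session / total

for session in (1, 45, 90):
    print(session, code_for_session(session),
          repr(code_for_session(session, use_words=True)),
          f"{display_delay(session):.1f}s delay")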

Three days after the last login, 94 percent of the test subjects could remember their random code word or phrase, which were seemingly nonsensical strings of characters like “dkce2121sdd” or phrases like “fruit, bat klingon Yeats, snow, trousers.”

Bonneau said that there was a big dimension of human memory that hasn’t been explored with passwords.