Category: Science

1,000 core chip developed

A team of boffins has emerged from its smoke-filled labs with a microchip containing 1,000 independent programmable processors.

The team, from the Department of Electrical and Computer Engineering at the University of California, Davis, developed the energy-efficient, 621 million-transistor “KiloCore” chip, which can manage 1.78 trillion instructions per second.

Team leader Bevan Baas, professor of electrical and computer engineering, said that it could be the world’s first 1,000-processor chip and that it has the highest clock rate of any processor ever designed in a university.

While other multiple-processor chips have been created, none has exceeded about 300 processors. Most of those were created for research purposes and few are sold commercially. IBM fabricated the KiloCore chip using its 32 nm CMOS technology.

Because each processor is independently clocked, it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.
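The direct core-to-core transfer the article describes can be sketched with per-core message queues in Python. This is a toy illustration only: KiloCore's real interconnect and instruction set are far more sophisticated, and every name below is invented. The point is simply that each "core" owns its own inbox, so data moves point-to-point instead of through one contended pool of shared memory.

```python
import queue
import threading

# Each "core" gets its own inbox; data moves point-to-point between
# cores rather than through a single shared memory area.
inboxes = {i: queue.Queue() for i in range(4)}

def core(core_id, successor):
    """Receive a value, do some work, and hand the result straight on."""
    value = inboxes[core_id].get()
    result = value * 2  # stand-in for real work
    if successor is None:
        inboxes[core_id].put(result)  # last core keeps the answer
    else:
        inboxes[successor].put(result)

threads = [threading.Thread(target=core, args=(i, i + 1 if i < 3 else None))
           for i in range(4)]
for t in threads:
    t.start()
inboxes[0].put(1)  # feed the pipeline
for t in threads:
    t.join()
final = inboxes[3].get()
print(final)  # 1 doubled by each of the four cores -> 16
```

Because no queue is shared by all workers, there is no single bottleneck structure; that is the architectural idea, scaled down from 1,000 cores to four threads.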

The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, which means the chip can be powered by a single AA battery. The KiloCore executes instructions more than 100 times more efficiently than a modern laptop processor.

The processor has already been adapted for wireless coding/decoding, video processing, encryption and other applications involving large amounts of parallel data, such as scientific computing and datacentre work.

Amazon’s new AI can tell if you are in a bad mood

Amazon is about to spruce up the AI function on its Echo personal assistant so that it can tell if you are hacked off.

Researchers are working on natural-language-processing updates that will help it detect emotion in someone’s voice, as well as remember and connect known information about a user to their requests.

For example, if Alexa knows that a user lives in Mill Street in Oxford, it’ll factor in that information when deciding how to answer the question “who is singing at the Kite tonight?” It will know that the user is not asking about kites but the pub they most like sleeping under the tables of.

If Alexa knows its master likes to listen to popular beat combo artist Kanye West, it’ll be more likely to know that it is working with an illiterate, tone deaf moron who has no concept of music – and its user is just as bad.

But spotting emotion is important. If Alexa can tell whether you are upset or angry, it can come up with emotional responses designed to soothe you. It might be the first to say “sorry” when you get mad at yourself for paying so much for it when there might be better personal AI servants on the market.

Rosalind Picard, a professor at MIT’s Media Lab, says adding emotion sensing to personal electronics could improve them: “Yes, definitely, this is spot on.” In a 1997 book, Affective Computing, Picard first mentioned the idea of changing the voice of a virtual helper in response to a user’s emotional state. She notes that research has shown how matching a computer’s voice to that of a person can make communication more efficient and effective. “There are lots of ways it could help,” she says.

World’s first computer told fortunes just like Big G

The world’s first computer was the Ancient Greek IDC or Gartner Group of its day, according to researchers.

The Antikythera Mechanism was once thought to have been used for navigation, but a decades-long investigation into the 2,000-year-old device has worked out that it was used for more than just astronomy and was a key divination tool.

It had been known that the bronze gears and displays were used to predict lunar and solar eclipses, along with the positions of the sun, moon and planets. However, without a user manual, boffins have been trying to work out what else it did using the same method people use to work out how to programme their video recorders.

The Katerina Laskaridis Historical Foundation Library in Greece took a deeper look at the tiny inscriptions meticulously etched onto the outer surfaces of the device’s 82 surviving fragments. To read them, the researchers used cutting-edge imaging techniques, including x-ray scanning. Some of the letters measure just 1.2 millimetres (1/20th of an inch) across and are engraved on the inside covers and the visible front and back sections of the device.

Mike Edmunds, a professor of astrophysics at Cardiff University, said that the original investigation was intended to see how the mechanism works, and that it was very successful.

“What we hadn’t realized was that the modern techniques that were being used would allow us to read the texts much better both on the outside of the mechanism and on the inside than was done before.”

There are 3,500 characters of explanatory text within the device.

The researchers described the machine as a kind of philosopher’s instructional device. The new analysis confirms that the mechanism displayed planets, as well as showing the position of the sun and the moon in the sky. It also appears to have been used for divination: the researchers suspect this because some of the inscriptions on the device refer to the colour of a forthcoming eclipse.

The colour of an eclipse was some sort of omen or signal. Some colours might be better for what’s coming than others.

It was not a research tool for astronomers; it was more something you would use to teach about the cosmos and our place in the cosmos.

There is nothing in the Greek to suggest it could be used by an Ancient Version of IDC predicting a downturn in the Antikythera Mechanism, but it could well have been.

Google wants to create artificial Roman drivers


One of the technical challenges of self-driving cars is making the automatic pilots behave like humans, and in some cases that means honking the horn.

In Rome, honking the horn has a complex etiquette which often leads to wild gestures and swearwords related to the drivers’ testicles or lack thereof, and the fact that the Virgin Mary might have actually been a pig.

Google is apparently discussing how its cars will communicate with human drivers in other cars to make sure they don’t kill themselves. The strategy, which is to teach the autonomous cars how to honk at humans, will go down like cold Quinto Quarto.

Google says 94 percent of minor crashes are caused by human error, so to combat this the Mountain View, California-based company’s autonomous cars are going to need to whip us fallible beings into shape by disciplining us when we misbehave.

The company says the point of the honking software is to “recognise when honking may help alert other drivers to our car’s presence — for example, when a driver begins swerving into our lane or backing out of a blind driveway.”

Google said that during testing it taught its vehicles to distinguish between potentially tricky situations and false positives, i.e. the difference between a car facing the wrong way during a three-point turn and one that is about to drive down the wrong side of the road.

“At first, we only played the horn inside the vehicle so we wouldn’t confuse others on the road with a wayward beep. Each time our cars sound the horn, our test drivers take note whether the beep was appropriate, and this feedback helps our engineering team refine our software further.”

Unlike Rome, with its single toot meaning something like “the light is actually green now, you might wish to move” and its long toot meaning “if you pull out now I will kill you and all your family and dance on their rotting bodies”, Google has come up with various types of honks.

“We’ve even taught our vehicles to use different types of honks depending on the situation. If another vehicle is slowly reversing towards us, we might sound two short, quieter pips as a friendly heads up to let the driver know we’re behind. However, if there’s a situation that requires more urgency, we’ll use one loud sustained honk.”
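Google has not published its honking software, but the behaviour described in the quote amounts to a simple decision rule, which can be sketched in Python. All of the names and values below are invented for illustration:

```python
def choose_honk(situation, urgency):
    """Pick a honk pattern in the spirit of the quoted behaviour:
    two short, quiet pips as a friendly heads-up, or one loud,
    sustained honk when the situation is urgent.

    Returns a list of (sound, duration_seconds, volume) tuples.
    """
    if urgency == "high":
        return [("honk", 2.0, "loud")]  # one loud sustained blast
    if situation == "slow_reverse_towards_us":
        return [("pip", 0.2, "quiet"), ("pip", 0.2, "quiet")]
    return []  # a false positive: stay quiet

print(choose_honk("slow_reverse_towards_us", "low"))
# [('pip', 0.2, 'quiet'), ('pip', 0.2, 'quiet')]
```

The hard part, as Google's own description makes clear, is not choosing the honk but classifying the situation correctly in the first place.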

We will not believe that it is effective until the car automatically winds down the window and extends an automatic fist at another driver.


Musk thinks we are all living in an advanced computer game

Tesla boss Elon Musk believes that we are almost certainly computer-generated entities living inside a more advanced civilisation’s video game.

Talking to the assorted throngs at Recode’s annual Code Conference, Musk said that computer games will soon reach the point where they become indistinguishable from reality.

His logic is that if games become indistinguishable from reality, then it is unlikely that ours is the base reality; we are probably already inside someone’s simulation.

“There’s a one in billions chance we’re in base reality. Arguably we should hope that that’s true, because if civilization stops advancing, that may be due to some calamitous event that erases civilization. So maybe we should be hopeful this is a simulation, because otherwise we are going to create simulations indistinguishable from reality or civilization ceases to exist. We’re unlikely to go into some multimillion-year stasis.”

Musk appears inspired by philosopher Nick Bostrom’s paper “Are You Living in a Computer Simulation?”

Bostrom claimed that one thing later generations might do with their super-powerful computers is run detailed simulations of their forebears, or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations.

If those simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct), then the vast majority of minds like ours would belong not to the original race but to people simulated by the advanced descendants of an original race.

On that reasoning, ordinary people are likely to be among the simulated minds rather than the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears.

Google claims its TPU improves machine learning

Google claims that its Tensor Processing Unit (TPU) advances machine learning capability by roughly three chip generations.

Google CEO Sundar Pichai told the Google I/O developer conference that TPUs deliver an order of magnitude higher performance per watt than commercially available GPUs and FPGAs.

Pichai said the chips powered the AlphaGo computer that beat Lee Sedol, the world champion of the incredibly complicated game of Go. Google is still not going into details about the Tensor Processing Unit, but the company did disclose a little more information in its blog.

“We’ve been running TPUs inside our data centres for more than a year, and have found them to deliver an order of magnitude better-optimised performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law),” the blog said. “TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.”
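The “reduced computational precision” trade-off in the quote is essentially quantisation: storing values in fewer bits so each operation needs less silicon, at the cost of a small rounding error. A rough Python sketch of the idea (not Google’s actual scheme, and the function name is invented):

```python
def quantise_int8(values):
    """Map a list of floats onto 8-bit integers plus one scale factor.
    Narrower numbers need fewer transistors per operation, which is
    the trade-off the TPU blog post describes."""
    scale = max(abs(v) for v in values) / 127.0
    quantised = [round(v / scale) for v in values]
    return quantised, scale

weights = [0.12, -0.5, 0.33, 0.9]
q, scale = quantise_int8(weights)

# Reconstruct approximate floats and measure the worst rounding error.
approx = [qi * scale for qi in q]
error = max(abs(w - a) for w, a in zip(weights, approx))
print(q, error)
```

Machine learning models tend to tolerate this kind of error well, which is why the TPU can trade precision for more operations per second.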

The tiny TPU can fit into a hard drive slot within the data centre rack and has already been powering RankBrain and Street View, the blog said.

What Google is not saying is what a TPU actually is and whether it will be a replacement for a CPU or a GPU. Word on the street is that the TPU could be a form of chip that runs machine learning models crafted using more power-hungry GPUs and CPUs.

Secret meeting mulls creating plastic humans

More than a hundred scientists, lawyers, and entrepreneurs gathered in secret to discuss the radical possibility of creating a synthetic human genome.

According to the New York Times, attendees were told to keep a tight lip about what took place, but someone must have dropped a hint to the press. A synthetic human genome is a big step up from gene editing: it uses chemicals to manufacture all the DNA contained in human chromosomes. Because it relies on custom-designed base-pair sequences, geneticists wouldn’t be bound by the two base pairs produced by nature.

They could, in theory, build microbes, animals and humans, so a company could build the right human for the job.

Obviously this is an ethical minefield, and the world of science appears not to have got the hang of getting the public on its side. It seems to think that if there is a public debate, religious nutjobs will lean on politicians who will put the lid on the whole thing. However, keeping the meeting secret has created an internet conspiracy stir, and reports of the meeting appear to be getting out of hand.

George Church, a professor of genetics at Harvard Medical School and a key organiser of the proposed project, said that the meeting wasn’t really about synthetic human genomes; rather, it was about efforts to improve the ability to synthesise long strands of DNA, which geneticists could use to create all manner of animals, plants and microbes.

Yet the original name of the project was “HGP2: The Human Genome Synthesis Project”. What’s more, an invitation to the meeting clearly stated that the primary goal would be “to synthesise a complete human genome in a cell line within a period of ten years”.

Church said the meeting was secret because his team had submitted a paper to a scientific journal and was not supposed to discuss the idea publicly before publication.

Church does want to build a complete human genome in a cell line within ten years. So far, scientists have only synthesised a simple bacterial cell.

Watson gets a job as a lawyer

Biggish Blue’s AI supercomputer Watson has just got a job as a bankruptcy lawyer.

Global law firm Baker & Hostetler has bought itself Ross, the first artificially intelligent attorney built by ROSS Intelligence. Ross will be employed in the law firm’s bankruptcy practice which currently employs more than 50 lawyers.

Ross can understand your questions, and respond with a hypothesis backed by references and citations. It improves on legal research by providing you with only the most highly relevant answers rather than thousands of results you would need to sift through.

It constantly monitors current litigation so that it can notify you about recent court decisions that may affect your case, and it will continue to learn from experience, gaining more knowledge and operating more quickly, the more you interact with it.

Andrew Arruda, ROSS Intelligence co-founder and CEO, said other law firms have signed licences for Ross, and more announcements are expected.

It is nice that lawyers will be the first race of sharks to be wiped out by our robotic overlords. If we could replace politicians next, that would be even better.


Smartphones give us ADHD symptoms

Smartphone use creates symptoms similar to Attention Deficit Hyperactivity Disorder (ADHD), a new study has suggested.

Kostadin Kushlev, a research associate in psychology at the University of Virginia, recruited 221 students at the University of British Columbia for a two-week study into the effects of smartphones on them.

During the first week, he asked half the participants to minimise phone interruptions by activating the “do-not-disturb” settings and keeping their phones out of sight and far from reach. He instructed the other half to keep their phone alerts on and their phones nearby whenever possible. In the second week, the groups swapped settings, with participants who had used the “do-not-disturb” settings switching their phone alerts on. The order in which each participant received the instructions was determined randomly by the flip of a coin.

Kushlev then measured inattentiveness and hyperactivity by asking participants how frequently they had experienced 18 symptoms of ADHD over each of the two weeks. The items were based on the criteria for diagnosing ADHD in adults specified in the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM-5). The results showed that more frequent phone interruptions made people less attentive and more hyperactive.
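The coin-flip counterbalancing in the study design is a standard crossover assignment, which can be sketched in a few lines of Python. The condition names and participant IDs here are invented for illustration:

```python
import random

CONDITIONS = ("do_not_disturb", "alerts_on")

def assign_order(participant_id, rng):
    """Flip a coin to decide which condition a participant gets first;
    week two is always the other condition (a crossover design)."""
    first = rng.choice(CONDITIONS)
    second = CONDITIONS[1] if first == CONDITIONS[0] else CONDITIONS[0]
    return {"participant": participant_id, "week1": first, "week2": second}

rng = random.Random(42)  # seeded only so the sketch is reproducible
schedule = [assign_order(pid, rng) for pid in range(221)]
print(schedule[0])
```

Counterbalancing the order this way means any difference between weeks (fatigue, exam season, novelty) averages out across the two groups instead of contaminating the comparison.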

ADHD is a neurodevelopmental disorder, and Kushlev is not saying that smartphones can cause ADHD, but the findings suggest that they can make people act as if they have it. He thinks smartphones could be harming the productivity, relationships and well-being of millions.

“Our findings suggest that our incessant digital stimulation is contributing to an increasingly problematic deficit of attention in modern society. So consider silencing your phone – even when you are not in the movie theater. Your brain will thank you,” Kushlev wrote.

Top musicians look for medical cures

Musicians Peter Gabriel, St. Vincent, Jon Hopkins and Esa-Pekka Salonen are helping an initiative headed up by former Nokia design head Marko Ahtisaari to explore the future of musical medicine.

The four musicians are joining The Sync Project as advisors, roles that will involve working with the scientists researching music’s therapeutic properties and helping to raise awareness of the project.

Gabriel and St. Vincent are art-rock veterans, Hopkins is an accomplished electronic producer, and Salonen conducts London’s Philharmonia Orchestra. However, Ahtisaari is more interested in their value as thinkers than as musical legends.

He said that he needed musicians and creators who have an active relationship with technology. It wasn’t so much about the content of their music, or about commissioning any work; it was because they are creative thinkers.

The idea is to build a biometric recommendation engine for music and create musical treatment programmes for medical conditions that match the efficacy of drug-based treatments without subjecting patients to the dangers and side effects of pharmacological programmes.

Ahtisaari cites treatment for Parkinson’s disease as an example. Users could contribute data from their streaming service of their choice and sensors from their phones or wearable devices that characterize their physical response to certain music.

Collected in bulk, that data could inform more specific clinical trials testing the effects of various musical qualities on patient mobility.

The final result would be a personalized playlist, one that aids movement and changes with the patient’s activity.

The project’s musical advisors can’t shape its medical aspects, but Ahtisaari is hoping they can help push the conversation regarding music’s therapeutic potential forward among both musicians and listeners.