Tag: data centre

Princeton boffins come up with open source super-chip

Princeton University researchers have emerged from their smoke-filled labs with a new open source computer chip that promises to boost the performance of data centres.

Dubbed “Piton” after the metal spikes that rock climbers drive into mountainsides to aid their ascent, the chip was shown off at the Hot Chips conference.

The Princeton researchers designed their chip specifically for massive computing systems. Piton could substantially increase processing speed while slashing energy usage. The chip architecture is scalable; designs can go from a dozen to several thousand cores.

The architecture enables thousands of chips to be connected into a single system containing millions of cores.

David Wentzlaff, a Princeton assistant professor of electrical engineering and associated faculty in the Department of Computer Science, said that Piton was based on new thinking about computer architecture. It was built specifically for data centres and the cloud.

“The chip we’ve made is among the largest chips ever built in academia and it shows how servers could run far more efficiently and cheaply.”

The current version of the Piton chip measures six millimetres by six millimetres and contains 460 million transistors, built using 32-nanometre manufacturing technology.

The bulk of these transistors is contained in the chip’s 25 cores. Most personal computer chips have four or eight cores.

In recent years companies and academic institutions have produced chips with many dozens of cores, but Piton’s readily scalable architecture could enable thousands of cores on a single chip and half a billion cores in a data centre, Wentzlaff said.

“What we have with Piton is really a prototype for future commercial server systems that could take advantage of a tremendous number of cores to speed up processing,” Wentzlaff said.

At a data centre, multiple users often run programs that rely on similar operations at the processor level. The Piton chip’s cores can recognise these instances and execute identical instructions consecutively, so that they flow one after another. Doing so can increase energy efficiency by about 20 percent compared to a standard core, the researchers said.
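
As a rough way to picture this, the Python sketch below models two programs’ instruction streams and pairs up identical instructions so they issue back to back, counting how much front-end work could be shared. It is a toy software model under our own assumptions, not the actual Piton hardware mechanism.

```python
# Toy model of issuing identical instructions from different programs
# back to back, in the spirit of the Piton cores. Illustrative only;
# the real mechanism lives in hardware, not software.

def drafted_schedule(stream_a, stream_b):
    """Interleave two instruction streams, pairing matching instructions
    so they issue consecutively. Returns the schedule and the number of
    duplicated front-end operations (fetch/decode) that could be reused."""
    schedule, reused = [], 0
    for ins_a, ins_b in zip(stream_a, stream_b):
        schedule.append(ins_a)
        schedule.append(ins_b)
        if ins_a == ins_b:          # identical work: the second copy can
            reused += 1             # reuse the first one's fetch/decode
    return schedule, reused

if __name__ == "__main__":
    # Two users running the same code path on shared hardware (made-up streams).
    prog_a = ["load", "add", "cmp", "branch", "store"]
    prog_b = ["load", "add", "cmp", "branch", "mul"]
    sched, reused = drafted_schedule(prog_a, prog_b)
    print(sched)
    print(f"{reused} of {len(prog_a)} instruction pairs could share front-end work")
```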

The Piton chip also parcels out access to off-chip memory among competing programs so that they do not clog the system. This approach can yield an 18 percent increase in performance compared with conventional means of allocation.

The Piton chip also gains efficiency through its management of cache memory. In most designs, cache memory is shared across all of the chip’s cores, but efficiency suffers when multiple cores access and modify it. Piton instead assigns areas of the cache, and the cores that use them, to specific applications. The researchers say this approach can increase efficiency by 29 percent per chip.
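
One way to picture that policy is a cache whose capacity is split into per-application partitions, so one application’s traffic can never evict another’s data. The Python sketch below illustrates that assumption in software; the real Piton design is a hardware mechanism and is more sophisticated.

```python
from collections import OrderedDict

class PartitionedCache:
    """Toy cache that gives each application its own slice of capacity,
    so heavy use by one application cannot evict another's data.
    Purely illustrative; not the actual Piton cache design."""

    def __init__(self, partitions):
        # partitions: {app_name: number_of_entries_reserved}
        self.caches = {app: OrderedDict() for app in partitions}
        self.limits = dict(partitions)
        self.hits = self.misses = 0

    def access(self, app, address):
        cache = self.caches[app]
        if address in cache:
            cache.move_to_end(address)     # LRU update on a hit
            self.hits += 1
        else:
            self.misses += 1
            if len(cache) >= self.limits[app]:
                cache.popitem(last=False)  # evict only within this partition
            cache[address] = True

if __name__ == "__main__":
    cache = PartitionedCache({"web": 4, "database": 4})
    for addr in range(4):                  # the database loads its hot set
        cache.access("database", addr)
    for addr in range(100):                # a streaming web workload thrashes
        cache.access("web", addr)          # only its own partition...
    for addr in range(4):                  # ...so the database still hits
        cache.access("database", addr)
    print(f"hits={cache.hits} misses={cache.misses}")
```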

Wentzlaff said: “We’re also happy to give out our design to the world as open source, which has long been commonplace for software, but is almost never done for hardware.”

Intel’s data centre business slows

Chipzilla produced a better than expected quarterly profit, but analysts are a bit worried about slower than normal revenue growth for its data centre business.

If the outfit’s cunning plan to be less dependent on PCs succeeds, then data centres should be front and centre, but it seems that Chipzilla’s data centre business growth is slowing.

It seems that generally weak demand from enterprises limited revenue growth at the highly profitable unit to five percent, taking it to $4 billion. Part of the problem is that Chipzilla did so well in the previous quarter, when it increased business in this area by nine percent, which makes this quarter’s growth look a bit sad.

Intel’s finance chief Stacy Smith said that as Intel enters the second half, he expects the enterprise segment of the business to stabilise and the cloud segment growth rate to accelerate.

Intel has been focusing on the unit and its operation that makes chips for internet-connected devices, as it seeks to lower dependence on the slowing PC market that it once helped create.

Sales from the Santa Clara, California-based chipmaker’s traditional PC business, which also includes chips for mobile phones and tablets, fell three percent to $7.3 billion in the second quarter ended July 2.

Global PC shipments fell less than expected in the quarter, helped by strength in the United States.

Overall, Intel reported a better-than-expected profit as its cost-cutting began to pay off. In April it announced plans to slash 12,000 jobs, or 11 percent of its global workforce, and it said about half of those cuts were already complete.

Intel’s forecast for $14.9 billion in current-quarter revenue topped the average analyst expectation of $14.63 billion.

Net income fell to $1.33 billion, or 27 cents per share, in the second quarter, from $2.71 billion a year earlier.

Profit for the quarter was hit by a one-time charge of $1.41 billion related to its cost-cutting drive.

Net revenue rose 2.6 percent to $13.53 billion, narrowly missing the average analyst estimate of $13.54 billion.

 

Intel’s Data Centre business flounders

Chipmaker Intel cut the revenue growth forecast for its highly profitable business of making chips for data centres, claiming that businesses are reducing spending due to weak macroeconomic growth.

Intel has been counting on the data centre business to help offset declining demand for its chips used in PCs, and agreed to buy Altera for $16.7 billion to help out.

Intel now expects data centre business to grow in “low double digits” in 2015, compared with its earlier forecast of about 15 percent growth.

Data centres are Chipzilla’s second biggest area and grew 19.2 percent in the first quarter, 9.7 percent in the second and 12 percent in the latest quarter.

Chief Executive Brian Krzanich insisted that the company was not “rethinking the long-term growth” of the business.

The weak data centre forecast took the shine off the company’s better than expected profit and revenue in the third quarter.

The company also trimmed its 2015 capital expenditure for the third time to $7.3 billion, plus or minus $500 million.

Intel had previously forecast capital expenditure of $7.7 billion, plus or minus $500 million.

The company said it expected fourth-quarter revenue of $14.8 billion, plus or minus $500 million. The midpoint of the range is a marginal increase from a year earlier. It’s into its pluses and minuses, that INTC.

Intel said revenue from its PC business fell 7.5 percent to $8.51 billion in the third quarter ended September 26.

Intel’s net income fell to $3.11 billion from $3.32 billion last year.

Net revenue declined to $14.47 billion from $14.55 billion, but beat analysts’ estimate of $14.22 billion.

Intel not going bust anytime soon

Chipmaking supremo Intel reported better than expected quarterly results after seeing growth in its data centre and internet of things businesses.

That apparently helped offset weak demand for personal computers that use the company’s chips.

Intel shares rose as much as 9.2 percent after market before paring some of their gains, probably as investors twigged that Intel had just cut its full-year capital expenditure forecast for the second time.

Intel said that it was expanding its line-up of higher-margin chips used in data centres to counter slowing demand from the PC industry, and it agreed to buy Altera for $16.7 billion in April as part of its cunning plan.

Revenue from the data centres grew 9.7 percent to $3.85 billion in the second quarter from a year earlier, helped by continued adoption of cloud services and demand for data analytics.

Chief Financial Officer Stacy Smith predicted robust growth rates for the data centre group, the Internet of Things group and the NAND businesses.

Revenue from the PC business, which is still Intel’s largest, fell 13.5 percent to $7.54 billion in the quarter ended June 27.

“Our expectations are that the PC market is going to be weaker than previously expected,” Smith said.

Research firm Gartner predicted that PC shipments would fall 4.5 percent to 300 million units in 2015, with no respite until at least 2016.

Intel forecast current-quarter revenue of $14.3 billion, plus or minus $500 million, while the cocaine nose jobs of Wall Street were expecting revenue of $14.08 billion.

The company also cut its 2015 capex forecast to $7.7 billion, plus or minus $500 million. It had cut its full-year capex forecast to $8.7 billion from $10 billion in April.

The company’s net profits fell to $2.71 billion from $2.80 billion a year earlier.

Net revenue fell 4.6 percent to $13.19 billion, but edged past the average analyst estimate of $13.04 billion. Intel’s stock has fallen about 18 percent this year.

Google might have wasted its cash on a quantum computer

Last year boffins were shocked when Google wrote a cheque for $15 million for a quantum computer system from D-Wave.

Now it turns out that the device may not be all it’s cracked up to be: it might not be a quantum computer after all, and Google was not the only one to fall for it.

Aerospace giant Lockheed Martin paid a cool $10 million for the world’s first commercial quantum computer from a Canadian start-up called D-Wave Systems. Last year, Google and NASA bought a second generation device for about $15 million, with Lockheed upgrading its own machine for a further $10 million.

At the time, the move was heralded as a new era for quantum computation, particularly when Cathy McGeoch at Amherst College in Massachusetts said last year that she had clocked the D-Wave device solving a certain class of problem some 3,600 times faster than a conventional computer.

But now, according to Medium.com, D-Wave has undergone a dramatic change in fortune.

A report from a team of physicists at IBM’s T J Watson Research Laboratory in Yorktown Heights, NY, and the University of California, Berkeley says that D-Wave’s machine may not be quantum at all.

Umesh Vazirani, one of quantum computing’s early pioneers, pointed out that the method used to demonstrate the machine’s “quantumness” did not really work: the test results could just as easily be explained by a classical algorithm.

“We outline a simple new classical model, and show that on the same data it yields correlations with the D-Wave input-output behaviour that are at least as good as those of simulated quantum annealing,” he wrote.

In other words, even if the D-Wave computer were not quantum at all, it would still be capable of producing the same results.
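
For a flavour of what a purely classical optimiser of this kind looks like, the Python sketch below runs plain simulated annealing on a small, randomly generated Ising problem. It is only an illustration of the general technique; it is not the classical model from the IBM and Berkeley paper, and the problem instance is made up.

```python
import math, random

def ising_energy(spins, couplings):
    """Energy of an Ising configuration: E = -sum over pairs of J_ij * s_i * s_j."""
    return -sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def simulated_annealing(n_spins, couplings, steps=20000, t_start=2.0, t_end=0.05):
    """Plain classical annealer: flip one spin at a time, accepting uphill
    moves with Boltzmann probability while the temperature falls."""
    spins = [random.choice([-1, 1]) for _ in range(n_spins)]
    energy = ising_energy(spins, couplings)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        i = random.randrange(n_spins)
        spins[i] *= -1
        new_energy = ising_energy(spins, couplings)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / t):
            energy = new_energy              # accept the flip
        else:
            spins[i] *= -1                   # reject and undo
    return spins, energy

if __name__ == "__main__":
    random.seed(1)
    n = 8
    couplings = {(a, b): random.choice([-1.0, 1.0])
                 for a in range(n) for b in range(a + 1, n)}
    spins, energy = simulated_annealing(n, couplings)
    print("spins:", spins, "energy:", energy)
```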

D-Wave can still argue that its machine is quantum but in a way that is not revealed in these tests. But at some point it’ll need to produce evidence to back up this claim and this might be tricky.

What is probably embarrassing for Google, NASA and Lockheed Martin is that they could have shelled out tens of millions for a cryogenically cooled pocket calculator or a potentially dead or alive cat. 

Qualcomm about to have a China crisis

US chipmaker Qualcomm is about to be bitten in the rump with a record $1 billion fine by the Chinese antitrust watchdogs.

China’s National Development and Reform Commission (NDRC) investigated Qualcomm last year and, according to Reuters, is having a quiet word with the US outfit.

Qualcomm has said it was still in the dark about the basis of the scrutiny, but it seems the NDRC is targeting IT providers which license patent technology for mobile devices and networks.

Cynics claim that the Chinese are using the NDRC to force foreign IT companies to lower domestic costs as it rolls out its faster 4G mobile networks this year.

What the NDRC is doing is trying to force Qualcomm to make all sorts of commitments regarding its technology and the licensing of it.

Qualcomm will make a bomb in licensing fees for the chipsets used by handsets in China, the world’s biggest smartphone market, as Chinese telecom firms invest $16.4 billion in equipment for 4G networks.

Under China’s anti-monopoly law, the NDRC can impose fines of between 1 and 10 percent of a company’s revenues for the previous year. Since Qualcomm earned $12.3 billion in China for its fiscal year ended September 29, that works out to anywhere between roughly $123 million and $1.23 billion, which is a serious amount of dosh.

The fine could be even higher if Qualcomm fails to make concessions in its talks with the NDRC.

In December, the head of the NDRC’s anti-price-fixing bureau told state media there was “substantial evidence” against Qualcomm in the antitrust probe. Details, however, remain sketchy.

But there could be a lot more foreign outfits in a little trouble in big China. China’s regulators are trying to target key industries to shield consumers from practices that could lead to what they call “unreasonably” high prices.

In 2011, the agency imposed one of its first major penalties against a foreign company, a $300,000 fine on Unilever Plc for violations of the pricing law.

The NDRC has also slapped Chinese and foreign companies with investigations and fines in the past year. 

Qualcomm talks up the data centre

Qualcomm’s soon-to-be Chief Executive Officer Steve Mollenkopf has been repeating common industry wisdom that the world wants chips in the data centre.

Mobile chip makers have been a little worried of late at the news that 2014 is expected to be a bad year for smartphones and so have been talking about other places to put their chips.

One of the most common ideas is to put mobile chips in data centres and take advantage of the fact that every business and its dog wants to set up huge data farms for cloud storage.

Speaking at the Consumer Electronics Show in Las Vegas, Mollenkopf had a bit of a problem in that Qualcomm had no specific microserver products to announce, so instead he talked up the outfit’s future direction.

“I think there’s going to be a tremendous amount of growth in computing and resources dedicated to supporting the cloud,” Mollenkopf said.

Part of Mollenkopf’s problem is that he does not actually replace CEO Paul Jacobs until March, so while the data centre may be on his mind now, there is no guarantee he will have anything to show us this year.

So far Qualcomm’s chips have all been targeted at the traditional consumer market.

Researchers revamp MPC to keep spies out of your data centre

Now that it has been revealed that the NSA has the keys to your data centre, analysts are working out new methods to shut them out.

One of the plans is to develop corporate data centres that encrypt data beyond the ability of the NSA to crack it.

The idea is to use a new encryption technique that allows data to be stored, transported and even used by applications without giving away any secrets.

The concept was presented by security researchers from Denmark and the UK to the European Symposium on Research in Computer Security.

It looks at a long-discussed encryption concept called Multi-Party Computation (MPC).

MPC allows two parties who have to collaborate on an analysis or computation to do so without revealing their own data to the other party.

The idea has been kicking around since 1982, but ways to accomplish it with more than two parties, or with standardised protocols and procedures, were long considered impractical.
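
To give a flavour of the primitive such protocols build on, the Python sketch below shows plain additive secret sharing over a prime field: two parties split their inputs into random-looking shares and compute a joint sum without either revealing its number. It is a bare-bones illustration of the general idea, not the SPDZ protocol itself, and the hospital scenario is made up.

```python
import random

P = 2**61 - 1   # a prime; all arithmetic is done modulo P

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the shared value."""
    return sum(shares) % P

if __name__ == "__main__":
    # Two hospitals want the total of their patient counts without
    # revealing the individual figures to each other.
    a_shares = share(1234, 2)      # hospital A splits its input
    b_shares = share(5678, 2)      # hospital B splits its input

    # Each party adds up the shares it holds, locally, without seeing
    # the other party's input.
    sum_share_party1 = (a_shares[0] + b_shares[0]) % P
    sum_share_party2 = (a_shares[1] + b_shares[1]) % P

    # Only the combined result is ever reconstructed.
    print(reconstruct([sum_share_party1, sum_share_party2]))  # -> 6912
```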

The Danish/British team have revamped an MPC protocol nicknamed SPDZ, which uses secret, securely generated keys to distribute a second set of keys that can be used for MPC encryptions.

This allows a party on one end of a transaction to prove that it knows a piece of information, such as a password, by offering a different piece of information that only someone who holds the original could produce.

The technique could allow secure password-enabled login without requiring users to type in a password or send it across the internet.

SPDZ had been rejected as too slow and cumbersome to be practical, but the revamped version seems to work a lot better.

Nigel Smart, professor of cryptology at the University of Bristol, streamlined SPDZ by reducing the number of times global MAC keys had to be calculated in order to create pairs of public and private keys for other uses.

Cutting down on these repetitive tasks makes the whole process much faster. It also keeps the global MAC keys secret, which makes the faster process more secure.

According to Slashdot the University of Bristol is already working out ways to commercialise the technique. 

Kenneth Brill, datacentre pioneer, dies

The man who is credited with playing an enormous role in shaping the modern data centre has died of cancer. Kenneth Brill was 69.

Brill single-handedly crafted an industry out of nothing; before him and his UpTime Institute, there was no identity or commonality among data centres.

He saw how it was possible to create an industry that could share and use information to improve operations.

UpTime created tier classifications for comparing data centres. These days people are always talking about tier 1, tier 2, tier 3 data centres.

He was also a big fan of data centre energy efficiency. He wanted to see better communication between IT staff and the engineering teams that run data centres, which he believed was the only way to get the most from operations.

Brill’s last major statement followed Amazon’s prolonged outage. He warned that the concentration of computing resources with large cloud providers was putting people who champion internal reliability at a disadvantage.

He thought that cloud providers were facing a problem similar to Japan’s Fukushima nuclear power plant: everything works fine until it fails, and when it fails the consequences are enormous.

Brill predicted that there will be more failures than we have been seeing, “because people have forgotten what we had to do to get to where we are.” 

Woz turns into Mystic Meg

Apple co-founder Steve Wozniak has been shuffling his Tarot cards and looking into his crystal balls to predict what will happen in 2013.

Rather than the world ending on Friday, Woz sees an important year with all the news being about data centre technologies.

According to Forbes, Woz believes that mobile devices will eventually become the “remote controls” of the world and that data centre technologies will be to 2013 what the cloud was to 2012.

He said more attention will be paid to enterprise data centre technologies because of the improving ability to use the data centre proactively.

There will be a rapid move from hard disks to NAND flash memory in the data centre, which will drastically improve performance, reliability and the ability to distribute everything through virtual machines, said Woz. He thinks this will lead to the de-centralisation of cloud services.

Woz said that enterprises with different offices in multiple cities will run the same cloud services out of each office and have the cloud services talk to each other to ensure synchronisation, improving overall efficiency.

For all this to work, consumer technologies will have to control the direction of the enterprise, with individual hardware, software and service components influencing the direction of other bits.

This will mean that the Bring Your Own Device trend will gain a lot more traction.

He also claims businesses will learn that it is worth investing resources in supporting a wide range of platforms and devices in order to increase morale and improve efficiency.

‘Choice’ is happiness and will become a motivational tool enterprises use to their advantage, he added.

Woz thinks that mobile devices will become local repositories for information shared between owners, who’ll use those devices to control their environment.

This will lead to a move towards flexible displays, which will be installed everywhere.

Woz thinks that we are reaching a tipping point where mobile devices will increasingly become our remote controls to the world.

He said that humanity will carry software on mobile devices, but display it on these communal screens, including those installed in conference rooms.

To do this, software will also be designed to display on screens of any size.