Category: Chips

Qualcomm releases new mobile modem

Qualcomm has launched what it claims will be the next big thing in LTE modem speed – a Gigabit Class LTE Chipset with download speeds of up to 1 Gbps.

Dubbed the X16, it will replace the X12 found in the Snapdragon 820, either as part of a future Snapdragon SoC or as a standalone chip. The X12 can only manage 600 Mbps download and 150 Mbps upload when the wind is behind it.

The reason everyone is getting moist about the X16 is that it will probably end up in the iPhone 8. The iPhone 7 will have an X12-class modem.

But more importantly, it will be included in the successor to the Snapdragon 820, which should appear in the shops around late February 2017.

The X16 modem supports Snapdragon All Mode, including Licensed Assisted Access (LAA) and LTE-U. This is Qualcomm’s first 14nm modem based on the FinFET process, and it will replace the 9×45 and 9×40 modems, which are built on a 20nm manufacturing process and top out at Cat 12 download / Cat 13 upload speeds.

The new X16 modem is category 16. This specification defines download speeds of 1 Gbps and upload up to 150 Mbps.
Cristiano Amon, executive vice president, Qualcomm Technologies, Inc., and president, QCT, said:

“In addition to serving as a significant milestone for the mobile industry, the Snapdragon X16 LTE modem is a powerful testament to Qualcomm Technologies’ continued technology leadership in all things wireless. Not only does the Snapdragon X16 blur the lines between wired and wireless broadband, but marks an important step toward 5G as we enable deeper unlicensed spectrum integration with LTE and more advanced MIMO techniques to support growing data consumption and deliver an even faster and smoother user experience.”

Qualcomm managed to reach 1 Gbps using carrier aggregation and 4×4 MIMO. The Snapdragon X16 LTE modem can receive 10 unique streams of data using only three 20 MHz carriers.
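As a rough back-of-the-envelope check (the per-stream figure and the 256-QAM assumption are ours, not Qualcomm's quoted breakdown): a single spatial stream on a 20 MHz LTE carrier peaks at roughly 75 Mbps with 64-QAM, and moving to 256-QAM raises that by a factor of 8/6. Ten such streams then land almost exactly on 1 Gbps:

```python
# Back-of-the-envelope LTE Cat 16 peak-rate estimate (assumed figures).
PER_STREAM_64QAM_MBPS = 75    # approx. peak for one stream on a 20 MHz carrier
QAM256_FACTOR = 8 / 6         # 8 bits/symbol (256-QAM) vs 6 bits/symbol (64-QAM)
STREAMS = 10                  # e.g. 4 + 4 + 2 MIMO layers across three carriers

per_stream_mbps = PER_STREAM_64QAM_MBPS * QAM256_FACTOR  # ~100 Mbps per stream
peak_mbps = per_stream_mbps * STREAMS
print(f"{peak_mbps:.0f} Mbps")  # prints: 1000 Mbps
```

The point of the sketch is that none of the individual ingredients is exotic; the gigabit headline comes from multiplying modest per-stream rates across many aggregated streams.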

Overclocker nearly breaks record with Skylake

An overclocker has set a significant record by fiddling around with the speed of Intel’s Skylake chips.

Chi-Kui Lam put together an ASRock motherboard, G.SKILL memory, a beefy 1.3kW Antec power supply and a ton of liquid nitrogen, and took the chip through the 7GHz barrier to settle at 7,025.66MHz.

The clock speed is blisteringly fast, although the CPU-Z screenshot shows that all cores but one were disabled to make it go faster.

However, the highest overclock ever seen came from an AMD FX-8370 pushed to a ridiculous 8,722.78MHz by an overclocker called The Stilt. That feat did not require the disabling of cores, as all eight were left functional. Back in October 2013, WYTIWX submitted a result of 8,543.71MHz with his Intel Celeron 352.

However, Chi-Kui Lam’s record is important because Intel’s Skylake has not been out long, and its chip design has changed in ways that make overclocking beyond 7GHz tricky.



Apple sued for creating bricks in its wall

The fruity cargo cult Apple is facing a class action for bricking the phones of naughty users who dared to have them repaired by someone other than its genii.

Apple issued an update which bricked iPhone 6 devices that have been repaired by third parties. The so-called “Error 53” problem appeared after an iOS software update and seemed to affect devices with replaced or damaged home buttons and Touch ID sensors.

Basically, this means that anyone dumb enough to buy an iPhone 6 but shrewd enough to realise that it did not have to be repaired by Apple is facing an expensive problem: they have to pay Apple to fix both the original fault and the one that Jobs’ Mob created.

Apple claims that it has done no such thing and that the error is the result of a “security feature”. According to Apple, “iOS checks that the Touch ID sensor matches your device’s other components during an update or restore. This check keeps your device and the iOS features related to Touch ID secure.”

We are not sure if they ran that particular excuse through the common sense department of the company for checking. Why would Touch ID need to do that?

The idea is similar to one Microsoft uses to check that you are not installing multiple copies of Windows on your PC, with a similar but less drastic result. To say that Apple was unaware that this would be the result of its “security feature” suggests either that its software people are so stupid they did not know it would happen, or that the company was trying to weed out those with “unauthorised components”.

PCVA, the Seattle-based law firm considering a class action lawsuit, thinks the latter. On its website it said: “We believe that Apple may be intentionally forcing users to use their repair services, which cost much more than most third-party repair shops.”

Intel ready to sacrifice power for energy efficiency

Chipzilla is ready to embrace alternatives to the speed-at-all-costs ethos which has made it so successful.

William Holt, who leads the company’s technology and manufacturing group, said this week that for chips to keep improving, Intel will soon have to start using fundamentally new technologies.

Holt pointed to two possible candidates – tunnelling transistors and spintronics. Both would require big changes in how chips are designed and manufactured, and would likely be used alongside silicon transistors.

What is important is that the new technology will not offer speed benefits over silicon transistors, so chips may stop getting faster. Instead, it would improve the energy efficiency of chips, something important for many leading uses of computing today, such as cloud computing, mobile devices, and robotics.

“We’re going to see major transitions,” said Holt, speaking at the International Solid State Circuits Conference in San Francisco. “The new technology will be fundamentally different.”

Holt said that the status quo can only continue for two more generations, just four or five years, by which time silicon transistors will be only seven nanometres in size.

Tunnelling transistors are far from commercialization, although DARPA and industry consortium Semiconductor Research Corporation are funding research on the devices. They take advantage of quantum mechanical properties of electrons that harm the performance of conventional transistors and that have become more problematic as transistors have got smaller.

Spintronic devices are doable and could hit the market next year. They represent digital bits by switching between two different states encoded into a quantum mechanical property of particles such as electrons known as spin.

Spintronics will appear in some low-power memory chips in the next year or so, perhaps in high-powered graphics cards.

For example, Toshiba announced last year that it had developed an experimental spintronic memory array that consumed 80 percent less power than SRAM, a type of high-speed memory.

Holt claimed that continued gains in energy efficiency, not raw computing power, are most important for the things asked of computers today.

“Particularly as we look at the Internet of things, the focus will move from speed improvements to dramatic reductions in power,” Holt said. Power is a problem across the computing spectrum. The carbon footprint of data centres operated by Google, Amazon, Facebook, and other companies is growing at an alarming rate, and the chips needed to connect many more household, commercial, and industrial objects, from toasters to cars, to the Internet will need to draw as little power as possible to be viable.

MIT comes up with deep learning for mobile

MIT researchers have emerged from their smoke-filled labs with a new chip which can give mobile gear deep learning powers.

At the International Solid State Circuits Conference in San Francisco this week, MIT researchers presented a new chip designed specifically to run mobile neural networks. The chip is 10 times as efficient as a mobile GPU and means mobile devices could run powerful AI algorithms locally.

Vivienne Sze, an assistant professor of electrical engineering at MIT whose group developed the new chip, said that deep learning was useful for many mobile applications, including object recognition, speech processing and face detection.

“Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

Dubbed Eyeriss, the new chip could be useful for the Internet of Stuff. AI armed networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

Sze and her colleagues used a chip with 168 cores, roughly as many as a mobile GPU has.


Eyeriss minimises the frequency with which cores need to exchange data with distant memory banks, an operation that consumes time and energy. Where GPU cores share a single, large memory bank, each Eyeriss core has its own local memory. The chip also has a circuit that compresses data before sending it to individual cores.
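The article does not say what compression scheme Eyeriss uses, but the general idea of shrinking data before it crosses the chip can be sketched with simple run-length encoding of zeros – neural-network activations are often sparse, so runs of zeros compress well. This is a hypothetical illustration of the principle, not the chip's real format:

```python
# Hypothetical sketch: run-length-encode the zeros in a sparse activation
# vector before shipping it to a core's local memory.
def rle_zeros(values):
    """Encode a list of numbers as (zero_run_length, value) pairs."""
    out, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            out.append((run, v))  # run of zeros, then the non-zero value
            run = 0
    if run:
        out.append((run, 0))  # trailing zeros, marked with a dummy value
    return out

def rle_decode(pairs):
    """Expand (zero_run_length, value) pairs back into the original list."""
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        if v != 0:
            out.append(v)
    return out

activations = [0, 0, 0, 5, 0, 0, 7, 0, 0, 0, 0]
packed = rle_zeros(activations)          # 3 pairs instead of 11 values
assert rle_decode(packed) == activations  # round-trips losslessly
```

With mostly-zero data, a handful of pairs stands in for a much longer vector, so far fewer bytes need to travel from main memory to each core.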

Each core can communicate directly with its immediate neighbours, so that if they need to share data, they don’t have to route it through main memory.

The final key to the chip’s efficiency is special-purpose circuitry that allocates tasks across cores. In its local memory, a core needs to store not only the data manipulated by the nodes it’s simulating but data describing the nodes themselves. The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximizes the amount of work that each of them can do before fetching more data from main memory.

At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip.

Sony to buy Altair

Sony is to write a cheque to buy the Israeli chipmaker Altair Semiconductor for $212 million.

The move is widely seen as the PS4 maker stepping up its investment in chip technology, after strong sales of camera sensors in the last few years helped turn the business around.

Altair Semiconductor, which has developed technology to allow small devices such as security alarms and electricity meters to connect to mobile networks, told Reuters last year that it was considering an initial public offering. The sudden buyout is therefore surprising, but a sensible move for Sony.

There has been a lot of consolidation in the chip industry of late, and it is fast becoming difficult to find anyone left to consolidate with. It is thought that those who can merge with other companies have already done so and will spend the rest of this year restructuring, ready for next year.

Sony said it expects to close the deal in early February.

Toshiba flogs part of its chip biz

Japan’s troubled Toshiba plans to sell part of its chip business as it aims to recover from a $1.3 billion accounting scandal.

Early interest in the sale has been shown by the Development Bank of Japan as the state-owned bank has already invested in Seiko’s semiconductor operations.

The sale would exclude Toshiba’s mainstay NAND flash memory operations which are still doing rather well.

Tosh is flogging its businesses that handle system LSI and discrete chips, which are widely used in cars, home appliances and industrial machinery. However, these businesses lost $2.78 billion in the year ended March 2015.

Toshiba has been focusing on nuclear and other energy operations, as well as its storage business, which centers on NAND flash memory chips.

Tosh wants to invest heavily in its flash memory production capacity in Japan to better compete with Samsung.


Samsung mass produces HBM2 4GB DRAM

Samsung has begun mass producing the industry’s first 4GB DRAM package based on the second-generation High Bandwidth Memory (HBM2) interface.

The DRAM is headed for use in high performance computing (HPC), advanced graphics and network systems, as well as enterprise servers.

The new DRAM is more than seven times faster than the current DRAM performance limit which means super-fast responsiveness for high-end computing tasks including parallel computing, graphics rendering and machine learning.

In a statement Samsung’s Sewon Chun, senior vice president, Memory Marketing, Samsung Electronics said that by mass producing next-generation HBM2 DRAM, the outfit can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies.
“Using our 3D memory technology here, we can more proactively cope with the multifaceted needs of global IT, while at the same time strengthening the foundation for future growth of the DRAM market.”

After all, who doesn’t need their multifaceted needs proactively coped with?

The 4GB HBM2 DRAM uses Samsung’s most efficient 20-nanometer process technology and advanced HBM chip design, and satisfies the need for high performance, energy efficiency, reliability and small dimensions, making it well suited for next-generation HPC systems and graphics cards.

The 4GB HBM2 package is created by stacking a buffer die at the bottom and four 8-gigabit (Gb) core dies on top. These are then vertically interconnected by TSV holes and microbumps. A single 8Gb HBM2 die contains over 5,000 TSV holes, more than 36 times as many as an 8Gb TSV DDR4 die, offering a dramatic improvement in data transmission performance compared to typical wire-bonding based packages.

Samsung’s new DRAM package features 256GBps of bandwidth, double that of an HBM1 DRAM package. This is equivalent to a more than seven-fold increase over the 36GBps bandwidth of a 4Gb GDDR5 DRAM chip, which has the fastest data speed per pin (9Gbps) among currently manufactured DRAM chips. Samsung’s 4GB HBM2 also enables enhanced power efficiency by doubling the bandwidth per watt over a 4Gb-GDDR5-based solution, and embeds ECC (error-correcting code) functionality to offer high reliability.
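Those totals follow from pin counts and per-pin speeds. The article only quotes the aggregate figures, so the interface widths below (a 1024-bit HBM2 stack versus a 32-bit GDDR5 chip) are standard values we are assuming rather than numbers from Samsung's statement:

```python
# Peak-bandwidth arithmetic behind the quoted figures (assumed bus widths).
hbm2_pins, hbm2_gbps_per_pin = 1024, 2.0   # 1024-bit HBM2 interface, 2 Gbps/pin
gddr5_pins, gddr5_gbps_per_pin = 32, 9.0   # 32-bit GDDR5 chip, 9 Gbps/pin

hbm2_gbytes = hbm2_pins * hbm2_gbps_per_pin / 8     # bits/s -> bytes/s
gddr5_gbytes = gddr5_pins * gddr5_gbps_per_pin / 8

print(hbm2_gbytes, gddr5_gbytes, hbm2_gbytes / gddr5_gbytes)
# prints: 256.0 36.0 and a ratio just over 7
```

The arithmetic shows why HBM2 wins despite a far slower per-pin rate: it simply brings vastly more pins to the party.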

In addition, Samsung plans to produce an 8GB HBM2 DRAM package within this year. By specifying 8GB HBM2 DRAM in graphics cards, designers will be able to enjoy a space savings of more than 95 percent, compared to using GDDR5 DRAM, offering more optimal solutions for compact devices that require high-level graphics computing capabilities.

TSMC close to 10nm tape out

TSMC has confirmed that it will tape out its first products based around a 10nm process within the next few months and will transition to 7nm by 2018 and 5nm by 2020.

During the recent announcement of its results TSMC let slip that it thinks it has cracked 10nm and will tape out its first 10nm parts within this quarter.

This means that it will have beaten Intel, which delayed its 10nm node until 2017. It might also have gained a few months on Samsung, which has declared that volume production of its own 10nm process node parts will begin before the end of the year.

CEO Mark Liu wants 7nm parts in 2018 and 5nm – which will require a switch to extreme ultra-violet lithography (EUV) – by 2020. EUV is needed because the gaps in the lithographic masks become too small for other forms of light to pass through cleanly.

Microsoft will not support Skylake in Windows 7 and 8

Software giant Microsoft has come up with a cunning plan to get those people who want to run Intel’s Skylake to upgrade to Windows 10. It says that it is only supporting the new CPU on older Windows versions for another 18 months.

Vole has announced that it will cease official support for devices running Intel’s 6th-generation Skylake chips on Windows versions 7 through 8.1. It all seems to us to be a way of pushing users towards Windows 10.

Chipzilla probably does not mind, because if users upgrade they are more likely to buy Intel chips – although if they have just splashed out on Skylake chips they are not going to want new ones for a while.

Microsoft’s Terry Myerson said that Windows 7 was designed nearly 10 years ago, before any x86/x64 SoCs existed.

“For Windows 7 to run on any modern silicon, device drivers and firmware need to emulate Windows 7’s expectations for interrupt processing, bus support, and power states – which is challenging for WiFi, graphics, security, and more,” he claimed.

“As partners make customizations to legacy device drivers, services, and firmware settings, customers are likely to see regressions with Windows 7 ongoing servicing.”

Myerson said that Windows 10 works rather well with Skylake chips, though it’s mostly geared towards mobile and laptops, flaunting battery life savings and graphical performance increases.

To be fair, the chances of someone running an Intel Skylake chip with Windows 7, 8, or 8.1 are slim, but it is possible that corporate PCs and notebooks are only supported on Windows 7 because that is what the company runs.

Microsoft said it will publish a list of devices running Skylake and Windows 7/8/8.1 that will be exempted from this rule and will instead remain supported through July 2017.

What we do not really understand is why Microsoft is so obsessed with getting people to upgrade. With its nagware and other strategies it is really hacking people off, and it does not seem to care.