Category: Graphics cards

AMD has a smaller Fury

AMD has added a third card to its Fury line that is much smaller than its siblings.

Dubbed the Radeon R9 Nano, the card is compact: six inches of Fiji GPU core built on a 28nm manufacturing process, paired with 4GB of High Bandwidth Memory (HBM).

The card is 1.5 inches shorter than the Fury X, and unlike its liquid cooled sibling, there’s no radiator and fan assembly to mount.

AMD wants the world to see the R9 Nano as the fastest mini-ITX graphics card, and space in those sorts of builds is thin on the ground.

The Nano has 64 compute units with 64 stream processors each for a total of 4,096 stream processors, just like Fury X. There are 256 texture units and 64 raster operations pipelines (ROPs), and an engine clock of up to 1,000MHz.

All this means that the Radeon R9 Nano can do 8.19 TFLOPs of compute performance, which is pretty floppy. That is close to the Fury X, which features a 1,050MHz engine clock and can do 8.6 TFLOPs when the wind is behind it.
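That 8.19 TFLOPs figure falls straight out of the shader count and clock, since each stream processor can retire one fused multiply-add (two operations) per cycle. A back-of-the-envelope sketch (the helper function is ours, purely for illustration):

```python
def peak_tflops(stream_processors: int, clock_mhz: int) -> float:
    """Peak FP32 throughput, assuming one FMA (2 ops) per shader per clock."""
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

print(f"R9 Nano: {peak_tflops(4096, 1000):.2f} TFLOPs")  # 8.19
print(f"Fury X:  {peak_tflops(4096, 1050):.2f} TFLOPs")  # 8.60
```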

The Nano uses 175W, which is 100W lower than the Fury X at 275W. According to AMD, the Nano offers up to twice the performance per watt of its previous generation Radeon R9 290X.

A single fan blows air over a large heatsink with densely packed aluminium fins. There’s also a dual vapour chamber block and heat pipes that run throughout, along with a dedicated heat pipe and heatsink for the voltage regulator.

AMD tells us the card is “library quiet”, with a claimed noise output of just 16 dBA.

AMD is claiming its Radeon R9 Nano is better than Nvidia’s GeForce GTX 970 in mini-ITX form at 4K resolution in several popular titles. It even manages to hit 60 FPS in Grand Theft Auto V.

The Radeon R9 Nano will be available the week of September 7, 2015, for $649 MSRP.

Nvidia releases Maxwell for the great unwashed

Graphics card maker Nvidia has released a second generation Maxwell card which has the novelty of being affordable.

The catchily titled GeForce GTX 950 will hit the shops with a US $159.99 price tag. It is meant to replace the GTX 750 Ti graphics card, which is well past its sell-by date.

The GeForce GTX 950 is based on the same second generation Maxwell GPU as the GeForce GTX 960, but its GM206 GPU has been cut down to 6 SMMs, for a total of 768 CUDA cores, 48 texture units and 32 ROPs.

The GM206 on the GTX 950 is paired with 2GB of GDDR5 memory running on a 128-bit memory interface.

The reference clocks provided by Nvidia are set at 1024MHz for the GPU base and 1188MHz for GPU Boost, while the memory is clocked at an effective 6.6GHz.
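From those numbers you can work out the card’s peak memory bandwidth: bus width in bytes times the effective data rate. A quick sketch (our own helper, not an official Nvidia figure beyond what the clocks imply):

```python
def memory_bandwidth_gbs(bus_width_bits: int, effective_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times effective data rate."""
    return bus_width_bits / 8 * effective_gbps

# GTX 950: 128-bit bus, 6.6GHz effective GDDR5
print(round(memory_bandwidth_gbs(128, 6.6), 1), "GB/s")  # 105.6
```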

The new GeForce GTX 950 is mostly aimed at MOBA gaming. Nvidia markets it as a card that can provide 60FPS gaming, be super responsive with low latency, and offer DirectX 12 and MFAA support in MOBA titles.

The GeForce GTX 950 can run plenty of games at 1080p resolution, including The Witcher 3: Wild Hunt, GTA V, Battlefield: Hardline, Call of Duty: Advanced Warfare and Far Cry 4.

The GeForce GTX 950 has a higher 90W TDP, compared to 60W on the GTX 750 Ti, and it needs a single 6-pin PCI-Express power connector to work. This is a pain, as the older card could be used just by plugging it into the motherboard.

Intel spills the beans on Skylake’s graphics

Intel has been releasing a few details about what is under the bonnet of its Skylake integrated graphics system.

A white paper on Intel’s website details the compute architecture of Intel’s “Gen9” graphics. It’s an update to the Gen8 white paper, which can be found in all its glorious whiteness here.

As with previous Core processors, Skylake uses a System-on-Chip (SoC) architecture.

“Intel 6th generation Core processors are complex SoCs integrating multiple CPU cores, Intel processor graphics, and potentially other fixed functions all on a single shared silicon die,” the white paper says.

“The architecture implements multiple unique clock domains, which have been partitioned as a per-CPU core clock domain, a processor graphics clock domain, and a ring interconnect clock domain. The SoC architecture is designed to be extensible for a range of products, and yet still enable efficient wire routing between components within the SoC.”

The ring topology is an on-die bus between CPU cores, caches, and graphics. It’s bi-directional and 32 bytes wide, with separate lines for different tasks. All off-chip system memory transactions going to and from the CPU cores and the graphics portion are facilitated by the ring interconnect.

What Skylake does have is significantly improved coherent SVM write performance, thanks to new LLC cache management policies. L3 cache capacity has also been increased to 768 Kbytes per slice, of which 512 Kbytes is available for application data.

The sizes of both L3 and LLC request queues have been increased and EDRAM now acts as a memory-side cache between LLC and DRAM. The EDRAM memory controller has moved into the system agent, adjacent to the display controller, to support power efficient and low latency display refresh.

Texture samplers now natively support an NV12 YUV format for improved surface sharing between compute APIs and media fixed function units.

Gen9 adds native support for the 32-bit float atomic operations of min, max, and compare/exchange. Also, the performance of all 32-bit atomics is improved for kernels that issue multiple atomics back to back.

The chip’s 16-bit floating point capability is improved with native support for denormals and gradual underflow.

Gen9 adds new power gating and clock domains for more efficient dynamic power management. This can particularly improve low power media playback modes.

Skylake’s graphics execution unit (EU) is similar to the Gen8 design. Each Gen9 EU has seven threads to work with, each of which features 128 general purpose registers. Each of those registers can store 32 bytes accessible as a SIMD 8-element vector of 32-bit data elements.
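Those figures pin down the register budget per EU: seven threads, each with 128 registers of 32 bytes, and each 32-byte register holding a SIMD vector of eight 32-bit elements. Worked out in a short sketch:

```python
threads_per_eu = 7
registers_per_thread = 128
bytes_per_register = 32          # one SIMD 8-element vector of 32-bit values

# Total general-purpose register file per execution unit
reg_file_bytes = threads_per_eu * registers_per_thread * bytes_per_register
print(reg_file_bytes // 1024, "KB of registers per EU")    # 28

# Each register holds 32 bytes / 4 bytes per element = 8 lanes
print(bytes_per_register // 4, "SIMD lanes per register")  # 8
```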

It will be Intel’s fastest integrated HD Graphics solution to date and does have a chance of competing with lower-end graphics cards.

Nvidia makes surprise profit

Nvidia has reported a surprise rise in second quarter revenues and gave a better than expected revenue forecast for the current quarter.

It was all down to strong demand for its graphics chips used in gaming and cars.

Nvidia’s revenue increased 4.5 percent in the quarter ended July 26, while the cocaine nose jobs of Wall Street had predicted revenue to decrease about eight percent.

The outfit predicted sales would fall in the third quarter, but the decline was less than Wall Street expected.

Nvidia gets most of its cash from its graphic chips made for PCs. There were fears the fall in PC sales would hurt Nvidia just like it did Intel and AMD.

The outfit said gaming revenue rose 59 percent, helped by strong sales of its GeForce series of gaming chips.

Nvidia has also been making chips that allow people to play graphics heavy games over the internet and chips used in a car’s dashboard display and in self driving cars.

Automotive revenue rose 76 percent but still accounted for only 6.2 percent of total revenue.

The company said eight million cars on the road were using its chips and that it was working with more than 50 companies for its DRIVE chip for self driving cars.

Revenue in Nvidia’s enterprise business fell 14 percent. The business, which makes chips used for software such as AutoCAD, accounted for 16.2 percent of total revenue.

The outfit’s total revenue rose 4.5 percent to $1.15 billion in the second quarter. Analysts had expected $1.01 billion.

Net income fell nearly 80 per cent to $26 million in the quarter because of higher costs and a bigger tax bill.

Nvidia forecast revenue to fall to $1.16-$1.20 billion in the current quarter from $1.23 billion a year earlier. Analysts had expected a fall to $1.10 billion.

Doom for AMD coming

Things are getting worse for the troubled chipmaker AMD, which has just come up with a mixed set of results just when it needs to reassure everyone it is not going down the gurgler.

For the first time its revenues were below the billion mark, at $942 million. AMD did warn us that things were going to be gloomy. Last week it said that results would be flat. The actual figure was an 8.5 per cent sequential decline and a 34.6 per cent drop from the same period a year ago. Flat apparently means a gentle decline.

It is not entirely AMD’s fault. The company needs PC sales and these are not happening. In fact IDC and its ilk don’t see things picking up for a while.

AMD’s Computing and Graphics division reported revenue of $379m, which was down 54.2 percent, year-on-year. Its operating loss was $147 million, compared to a $6 million operating loss for last year’s equivalent quarter.

Lisa Su, AMD president and CEO, in a statement said that strong sequential revenue growth in AMD’s enterprise, embedded, and semi-custom segment and channel business was not enough to offset “near-term” challenges in its PC processor business. This was suffering due to lower than expected consumer demand that impacted sales to OEMs, she said.

“We continue to execute our long-term strategy while we navigate the current market environment. Our focus is on developing leadership computing and graphics products capable of driving profitable share growth across our target markets,” she added.

In the semi-custom segment, AMD makes chips for video game consoles such as the Nintendo Wii U, Microsoft Xbox One, and Sony PlayStation 4. That segment did reasonably well, up 13 percent from the previous quarter but down eight percent from a year ago.

But AMD’s core business of processors and graphics chips fell 29 percent from the previous quarter and 54 percent from a year ago. AMD said it had decreased sales to manufacturers of laptop computers.

AMD does expect the third quarter to be better, with revenue increasing by six percent. Windows 10’s arrival is expected to push a few sales its way.

AMD Fury exposed

AMD has leaked some news on its Fiji range and confirmed specifications for what will be its most powerful graphics card to date.

According to Hot Hardware, Fiji will initially be available in both Pro and XT variants with the Fiji Pro dubbed “Fury” and Fiji XT being dubbed “Fury X”.

The standard Fury is slightly neutered compared to the Fury X, given its pared down stream processors, compute units, and texture mapping units. But when all is said and done, the garden variety Fury still touts single-precision floating point (SPFP) performance of 7.2 TFLOPS compared to 5.6 TFLOPS for a Radeon R9 290X. That’s a roughly 29 percent performance improvement, which is nothing to scoff at in the graphics world.

The Fury X with its 4096 stream processors, 64 compute units, and 256 texture mapping units manages to deliver 8.6 TFLOPS, or a 54-percent increase over a Radeon R9 290X.
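Those percentage claims check out against the raw TFLOPS numbers; a quick sanity check with a throwaway helper of our own:

```python
def pct_gain(new_tflops: float, base_tflops: float) -> float:
    """Percentage improvement of one throughput figure over another."""
    return (new_tflops / base_tflops - 1) * 100

print(f"Fury   vs R9 290X: {pct_gain(7.2, 5.6):.0f}%")  # 29%
print(f"Fury X vs R9 290X: {pct_gain(8.6, 5.6):.0f}%")  # 54%
```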

Both the Fury and the air-cooled Fury X dissipate heat via three axial fans. The water-cooled Fury X shares the same physical hardware specs as the air-cooled card, but will see its GPU clock increased by an unspecified amount, which should allow it to deliver slightly more than 8.6 TFLOPS.

What is looking sexy, for those who need to get out more, is AMD’s High Bandwidth Memory (HBM) interface, which is stacked vertically, decreasing the PCB footprint. It is integrated directly into the same package as the GPU/SoC, leading to further efficiencies, reduced latency and blistering bandwidth of more than 100GB/sec per stack (four stacks per card).

HBM is claimed to deliver three times the performance-per-watt of GDDR5 memory.
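The headline bandwidth figures follow directly from bus width times effective data rate; a quick check of the spec-sheet numbers (the helper function is ours, for illustration):

```python
def bandwidth_gbs(bus_width_bits: int, effective_gbps: float) -> float:
    """Peak memory bandwidth in GB/s from bus width and effective data rate."""
    return bus_width_bits / 8 * effective_gbps

# Fiji's HBM: 4096-bit interface at 1Gbps effective (500MHz, double data rate)
print(bandwidth_gbs(4096, 1.0), "GB/s")  # 512.0
# Hawaii's GDDR5 for comparison: 512-bit at 5Gbps effective
print(bandwidth_gbs(512, 5.0), "GB/s")   # 320.0
```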

 

| | Fury X (Water Cooled) | Fury X (Air Cooled) | Fury (Air Cooled) | R9 290X |
|---|---|---|---|---|
| GPU | Fiji XT | Fiji XT | Fiji Pro | Hawaii XT |
| Stream Processors | 4096 | 4096 | 3584 | 2816 |
| GCN Compute Units | 64 | 64 | 56 | 44 |
| Render Output Units | 128 | 128 | 128 | 64 |
| Texture Mapping Units | 256 | 256 | 224 | 176 |
| GPU Frequency | ≥ 1050MHz | 1050MHz | 1000MHz | 1000MHz |
| Memory | 4GB HBM | 4GB HBM | 4GB HBM | 4GB GDDR5 |
| Memory Interface | 4096-bit | 4096-bit | 4096-bit | 512-bit |
| Memory Frequency | 500MHz | 500MHz | 500MHz | 1250MHz |
| Effective Memory Speed | 1Gbps | 1Gbps | 1Gbps | 5Gbps |
| Memory Bandwidth | 512GB/s | 512GB/s | 512GB/s | 320GB/s |
| Cooling | Liquid, 120mm Radiator | Air, 3 Axial Fans | Air, 3 Axial Fans | Air, Single Blower Fan |
| Performance (SPFP) | ≥ 8.6 TFLOPS | 8.6 TFLOPS | 7.2 TFLOPS | 5.6 TFLOPS |
| TDP | 300W | 300W | 275W | 290W |
| GFLOPS/Watt | ≥ 28.7 | 28.7 | 26.2 | 19.4 |

 

Sapphire Radeon HD6990 4GB reviewed

Here’s a question for you. What’s huge, blisteringly fast, very loud, needs its own power station and is not made by Nvidia? The answer is AMD’s latest incarnation of the Northern Islands range of GPUs, the Antilles, or to give it its marketing name, the Radeon HD6990. Oh, and incidentally, it’s currently the fastest single card on the planet.

When the HD6*** series was launched, AMD caught everyone on the hop by tinkering with the product numbering system, so that the two original launch cards, the HD6870 and HD6850, weren’t direct replacements for the HD5870 and HD5850. Because of the changes in the core architecture, this allowed AMD to offer cards with the same level of performance but at a much better price point.

Thankfully the HD6990 isn’t about wishy-washy stuff like trying to get the right amount of bang for your buck, especially when you see the price of the thing. No, this card is all about AMD trying to regain the king-of-the-hill title for the fastest single graphics card from Nvidia.

To get its frame-crunching performance the HD6990 uses two, yes, two slightly down-tuned high-end Cayman XT cores as found in the HD6970 cards. Both the core and memory clocks have been turned down in the HD6990, at least as standard out of the box, but more on that later. The core and its 3072 stream processors fly along at 830MHz, down from 880MHz, while the 4GB of GDDR5 memory is clocked at 1,250MHz (5GHz effective) instead of the Cayman XT’s normal 1,375MHz (5.5GHz effective).
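The “effective” memory speed comes from GDDR5 transferring four data words per memory clock, and the headline compute figure from the combined shader count. A rough sketch of the arithmetic:

```python
# GDDR5 is quad-pumped: effective rate = 4x the memory clock
mem_clock_mhz = 1250
print(mem_clock_mhz * 4 / 1000, "GHz effective")  # 5.0

# Combined FP32 throughput of both down-clocked cores: shaders x 2 ops x clock
stream_processors = 3072
core_clock_hz = 830e6
print(round(stream_processors * 2 * core_clock_hz / 1e12, 2), "TFLOPS")  # 5.1
```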

The reason for its huge size – it measures a whopping 305mm (12in) in length, about the same as the previous generation’s HD5970 – is mainly down to keeping the two cores cool. For this it uses not one but two vapour chamber heatsinks, one over each core, separated by a very loud central fan.

To deliver power to the GPUs and memory more effectively than on its predecessor, the cores are placed further apart on the HD6990 than on the HD5970, with the gap between them filled with power regulators and the PCI-E 2.0 bridging chip.

On top of the card there are two 8-pin PCI-E power connectors, so you know this baby consumes a fair amount of power. That’s a bit of an understatement, as the HD6990 has been designed from the ground up to run at 450 watts. Out of the box it still needs 375 watts to get going; the higher figure comes into play should you decide to ignore the warranty warning sticker.

What’s this about a warranty warning sticker then? One of the features of the more recent Northern Islands based cards is a two-way BIOS switch on top of the PCB. This is handy if you like to tinker with the graphics card BIOS to make it run faster: if you muck things up badly, at least you have an option to get the card back to running normally.

With the HD6990, however, things are a bit different, as the switch allows you to run the card either at the stock 830MHz of the HD6990 or at the normal speed of the Cayman XT core, that is to say 880MHz, which necessitates an increase in the voltage fed to the chip.

So we have a factory-fitted overclocking option. Great, but hold on, what’s this yellow warning sticker on the switch? It’s a warning that overclocking voids the warranty of the card. It turns out we have a factory overclocked card that shouldn’t be overclocked, as chances are it will void the warranty – go figure.

As you might expect for one of the first cards to see the light of day, Sapphire’s HD6990 is a reference design with a new sticker on the cooler, but some thought has been given to the cable bundle that comes with the card, as it has an unusual arrangement of ports on the back panel.

There’s a single DVI port and four mini-DisplayPorts, so Sapphire bundles in all you need to get going with a multi-panel Eyefinity setup: a DVI to VGA adaptor; passive miniDP to DP, miniDP to SL-DVI and miniDP to HDMI dongles; and an active DP to SL-DVI dongle. In Eyefinity mode the HD6990 can currently support up to five screens, but when the DisplayPort 1.2 drivers eventually surface you will be able to daisy-chain additional DisplayPort 1.2 monitors to each DP output.

To test, we’re using an Intel Core i7 Extreme 965 3.2GHz processor and 6GB of Crucial PC3-10600 DDR3 which sits in an MSI X58 Pro motherboard, together with a Western Digital WD1500ADFD Raptor hard drive with Windows 7 Ultimate 64-bit installed and a Be Quiet 850 power supply.

To test the DirectX 11 capabilities of the card we used the DiRT2 game and Futuremark’s latest 3DMark benchmark, 3DMark 11, while the other two tests, 3DMark Vantage and Far Cry 2, are DirectX 10 only.

Benchmarks

[Charts: 3DMark Vantage score, Far Cry 2 frames per second, 3DMark 11 overall score and DiRT2 framerate, all tested with the Sapphire Radeon HD6990 4GB]

Performance-wise the HD6990 is a monster: it kicks sand in every other single card’s face and makes a decent fist of moving into dual-card territory, even at the default out-of-the-box clock settings.

Building two of these beasties into a CrossFire setup should lead to some awe-inspiring frame rates – that is, if you’ve got over a grand to spend, a huge full-tower case and a local power station to plug into.