Author: Paul Taylor

Here comes our Intel Core i7 3770K review

Here is our review of the Core i7-3770K processor, Intel’s highest-end Ivy Bridge-based processor. There’s a lot to be discussed about it, but we’ll start from the top.


Sandy Bridge exits the scene

Sandy Bridge was a tock in Intel's tick-tock design and manufacturing strategy. A 'tock' is usually a new architecture on a mature process and, as such, normally results in better yields and much better revenues for the manufacturer. Ivy Bridge, its successor, is a tick: a new process with a slightly tweaked architecture, so it carries a risk of lower yields.

Sandy Bridge was a successful move for Intel, particularly in the processor graphics department. The 32nm-built processor fully integrated the graphics core and improved GPU performance over its predecessor, Clarkdale. It introduced an extended instruction set named AVX, hardware video encoding features and optimised branch prediction, amongst other improvements. The now-famous K series offered unlocked multipliers and some serious overclocking headroom, which proved to be an enthusiast's delight. It was a very successful design, and one that was always going to be hard for Intel to top.

As Sandy Bridge bows out of the market, you'll see boxes and boxes of it at heavily discounted prices right now. The brutal price slashing began a week before the launch, emptying shelves and making room for the shiny new toy to come.


Ivy Bridge arrives, not late, not early

Lo and behold, Ivy Bridge, Intel's 3rd generation Intel Core processor with processor graphics (as the chipmaker calls it). Not really late to the party, nor early; just on time, considering it is Intel that's pushing the market forward. Despite rumours of delays and some crossed wires between Intel execs, the CPUs officially launched this Monday.

As of now, Intel is introducing 14 new Ivy Bridge-based SKUs. These include one mobile extreme edition, four standard mobile versions, five desktop and four low-power ones. In order, these are:

Core i7-3920XM, Core i7-3820QM, Core i7-3720QM, Core i7-3612QM, Core i7-3610QM, Core i7-3770K, Core i7-3770, Core i5-3570K, Core i5-3550, Core i5-3450, Core i7-3770T, Core i7-3770S, Core i5-3550S and the Core i5-3450S.

The 3 prefix in the numbering is the generation, ie: 3rd generation Core processors, while the rest of the number represents the model itself. The letter suffixes represent variants: K models are multiplier-unlocked, S models are low power and T models are ultra-low power. You can see the full spec sheets below.
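The numbering scheme above is regular enough to decode mechanically. Here is a quick sketch in Python; the function and the suffix table are our own illustration of the convention, not anything Intel publishes.

```python
# Illustrative decoder for Intel's 3rd-gen Core model numbers, based on the
# scheme described above. The mappings are our own sketch, not an Intel API.
SUFFIXES = {
    "K": "unlocked multiplier",
    "S": "low power",
    "T": "ultra-low power",
    "XM": "mobile extreme",
    "QM": "quad-core mobile",
}

def decode_sku(model: str) -> dict:
    """Split e.g. 'i7-3770K' into brand, generation, model and variant."""
    brand, number = model.split("-")
    digits = "".join(ch for ch in number if ch.isdigit())
    suffix = number[len(digits):]
    return {
        "brand": brand,
        "generation": int(digits[0]),  # leading 3 = 3rd generation
        "model": digits,
        "variant": SUFFIXES.get(suffix, "standard"),
    }

print(decode_sku("i7-3770K"))
# {'brand': 'i7', 'generation': 3, 'model': '3770', 'variant': 'unlocked multiplier'}
```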


Intel Ivy Bridge desktop CPUs

Low Power Ivy Bridge CPUs

Mobile Ivy Bridge CPUs

Some facts about Ivy Bridge

Ivy Bridge is the successor to Intel’s Sandy Bridge microarchitecture. It isn’t a completely new design, but a spin on its predecessor, built on a smaller process and introducing a few new tweaks to the original recipe… some of them more than just pure performance tweaks. Still, we need to state some facts about Ivy Bridge, even before we start the testing. There are two parts to the Ivy Bridge architecture that need focusing on.

First of all, Intel proudly parades Ivy Bridge as the first processor built on 22nm "3D" (ie: tri-gate) transistors. Yes, 3D is all the rage even on CPUs. Simply put, this means the gate now wraps around a raised silicon fin on three sides, keeping current leakage down (allowing Intel to scale its CPUs to 22nm and beyond) as well as providing some valuable space savings. Transistors built on this 22nm process also require less power, which amounts to some substantial power savings on the CPUs.

Ivy Bridge integrates a more advanced graphics core onto the die, the HD 4000: a DirectX 11 (ie: hardware tessellation) and DirectCompute capable part which now shares the CPU's own L3 cache. The Intel HD 4000 processor graphics features 16 Execution Units (let's call them shaders), Clear Video Technology (to offload video decode) and Quick Sync Video, hardware-based encoding and decoding which, as we'll see, works quite well. Intel claims up to twice the performance of the graphics in its Sandy Bridge predecessor.

Ivy Bridge is a 1.4 billion transistor processor with a die size of just 160mm2. By comparison, Sandy Bridge had 1.16 billion transistors and a die size of 216mm2. Despite the higher transistor count, the more efficient power design of the 22nm "3D transistors" still racks up power savings, from 95W on the 2700K down to 77W on the 3770K. You can also see that the "processor graphics" die area has become considerably larger than its predecessor's.
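A quick back-of-the-envelope check on those die figures shows just how much denser the 22nm part is:

```python
# Back-of-the-envelope check on the die figures quoted above.
ivy = {"transistors": 1.4e9, "area_mm2": 160, "tdp_w": 77}
sandy = {"transistors": 1.16e9, "area_mm2": 216, "tdp_w": 95}

ivy_density = ivy["transistors"] / ivy["area_mm2"]        # ~8.75M per mm2
sandy_density = sandy["transistors"] / sandy["area_mm2"]  # ~5.37M per mm2

print(f"Density gain: {ivy_density / sandy_density:.2f}x")  # Density gain: 1.63x
print(f"TDP saving: {sandy['tdp_w'] - ivy['tdp_w']}W")      # TDP saving: 18W
```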

Sandy Bridge Die, labeled

Ivy Bridge die, labeled

Sandy Bridge (on top) and Ivy Bridge (below), you can see that the processor graphics element has swollen up considerably in the latter.

The new architecture comes hand in hand with a new chipset family, the 7 series, codenamed Panther Point. This chipset is compatible with both Sandy Bridge and Ivy Bridge, but not first generation Core products.

Intel supplied a DZ77GA-70K motherboard, which is powered by the Z77 chipset and was launched a week prior to the Ivy Bridge release. The DZ77GA-70K, like most Intel motherboards, has all the shiny LEDs and the looks of a deadly killer, but is very tame when it comes to overclocking and stepping out of bounds in general, even though its EFI BIOS is one of the best we've seen to date. From system monitoring to dialling up the clock on the CPU, it's all dead simple. Our overclocking experiments with the motherboard yielded a 1.4GHz overclock (3.5GHz to 4.9GHz), which was easy to achieve but hard to push beyond on this particular board; something Asus or Gigabyte will no doubt pick up and take to the next level. Still, the EFI BIOS is gorgeous and simple to use.

The 7-series chipset includes Intel Rapid Storage Technology 11, USB 3.0, Thunderbolt support, SATA 3.0, PCIe 3.0 and up to 3 independent displays (depending on configuration). It’s what the 6-series could have been, in essence.

7 Series Chipset Overview



Our engineering sample Core i7 3770K is the counterpart to Intel's Core i7 2700K Sandy Bridge: both are clocked at 3.5GHz and both sport four cores / eight threads. Both have the same Turbo Boost speed of 3.9GHz and both are in the lab for our apples-to-apples comparison. Intel promised something in the vicinity of a 7 to 15 percent pure CPU performance increase, and almost twice as much in "media" processing thanks to the new graphics core, so let's see what we get.

We'll begin with a few CPU benchmarks. We aren't holding our breath here; to be fair, Ivy Bridge didn't introduce any revolutionary new magic tricks.

Cinebench R11.5 score

In Cinebench R11.5, the HD 4000 GPU is clearly making the difference, and the 3770K pulls ahead of its predecessor by a comfy margin.

Passmark Int/FPU score

Passmark is a simple fire’n’forget benchmark that assesses PC performance on several levels. We’ve focused on FPU and Integer performance. The Ivy Bridge FPU is tremendously more efficient than its predecessor, beating it by a 67 percent margin. Overclocked, the 3770K scales very well.

PCMark7 Computation score

The PCMark 7 benchmark suite tests all PC subsystems, but it's the Computation score we're actually interested in here. Ivy Bridge and Sandy Bridge are almost 1-for-1.

POV-Ray Biscuit

POV-Ray is a ray tracing benchmark that relies on CPU muscle to render its target image.

SuperPI score

Purely mathematical in nature, Super PI maxes out single-threaded performance to calculate pi, in this case to 2 million digits.

SANDRA 2010 AES 256 bandwidth

The 3770K features a new encryption engine that allows it to squeeze a lot more data down the pipe.

SANDRA 2010 Arithmetic score

WinRAR Compression 320MB time

WinRAR Compression shows the minor edge the 3770K offers over the Core i7 2700K. A bit meh, if you ask us.

Now onto some strictly graphics-oriented benchmarks.

The HD 4000 end of business warrants its own analysis. With its 16 Execution Units and CPU-shared LLC (Last Level Cache) the HD 4000 is now spelling out some doom and gloom for the low-end discrete graphics business.

3DMark Vantage score

The inevitable 3DMark Vantage benchmark shows off DirectX 10 performance for the HD 4000 graphics. Granted it’s nothing to write home about, but it seems Intel is finally getting somewhere with its graphics processors.

3DMark 11 Performance score

3DMark 11 performance is nothing to sneeze at, considering that DirectX 11 support is brand new to the Intel lineup. We did get some artifacts in some scenes, but we believe this to be a driver issue, more than the hardware getting uncomfortable with the benchmark.

Dirt 3

We put Dirt 3 at max settings and Intel’s processor graphics survived the ordeal. If you scale down AA, you can game quite well on Intel’s new toy.

Metro 2033

We threw Metro 2033 at it as a crash test. The Metro 2033 – Frontline benchmark, running in DX 11 mode with Very High details, was like a slide-fest at times, but, again, scale back the details and image quality just a little bit and you’ll find something playable.

ComputeMark score

Considering Intel's HD 4000 is now an OpenCL/DirectCompute capable part, we ran ComputeMark on it. The HD 4000 scored about a quarter of what the discrete competition manages.

TechARP x264 Benchmark

Finally, our media encoding test is where Intel’s HD graphics part stretches its legs. The HD 4000 graphics with its new media encoding engine chews away at frames almost as well as a discrete part.



Overall, the Ivy Bridge core offers some meagre performance gains over Sandy Bridge, good power savings and some great potential if you like to overclock your CPUs. The Core i7 3770K's direct competition hails not from AMD (it hasn't for a while now) but from its direct predecessor, the Core i7 2700K.

Over the next weeks you'll also see that Ivy Bridge brought with it a bevy of new hardware releases, from motherboards to RAM to SSDs, as one way or another you do get some unique advantages if you buy hardware that has been optimised for Intel hardware. The optimisations, however, revolve mostly around the motherboard and its chipset rather than the CPU, so if you see Z77 bundles with Core i7 2600K processors at a good price, you might want to consider the deal.

As much as HD 4000 graphics are an improvement over their Sandy Bridge predecessor, many will keep asking why bother with processor graphics in the mid-to-high end, considering most discrete GPUs will simply annihilate it. Ivy Bridge does bring DX 11 compute capabilities, which we can only expect Intel to leverage down the line. Our media encoding results with the HD 4000 were close to those we had with a discrete GPU (a GTX 460 1GB), which is nothing short of amazing. Gaming, while it might not be its forte, is definitely on the menu. Add to that the fact that you can combine the processor graphics with a discrete part, and it's up to Intel to bring some additional features to the fore.

Sandy Bridge was, admittedly, a hard act to follow, but Ivy Bridge is more than a speed bump with minor architectural improvements. It’s an important shift in design and manufacturing for Intel. In its own right, Ivy Bridge is a formidable opponent even for some higher-end Extreme Edition CPUs. It happens to also have a great deal of potential for forthcoming software and driver updates, like OpenCL/DirectCompute support. “Potential” is the operative word here, and it might not shake you to your core (no pun intended) and make you rush out to buy it.

If you can do without the power savings, overclocking tweaks and processor graphics, you might be better off picking up a Core i7 2600K/2700K on sale, but if you were about to buy one, this supersedes – dollar for dollar – what the 2700K had on offer, on just about every level. If you already own a Core i7 2600K or 2700K, you needn’t go digging in your pocket for the upgrade money just yet.

Portugal considers 'Terabyte Tax'

In what legislators are calling an attempt to “bring old legislation into the 21st century”, the Portuguese parliament is considering taxation on storage devices, in an attempt to protect copyright holders.

According to local media outlet Exame Informatica, the 'minor' legislative update, proposed by the Portuguese Socialist Party (currently in opposition), would have consumers forking out for a new tax on storage devices, all in the name of copyright protection, yet all but killing off HDD sales in the country.

The proposal would have consumers paying an extra €0.02 per gigabyte in tax, almost €21 extra per terabyte of data on hard drives. Devices with storage capacities in excess of 1TB would pay an aggravated tax of 2.5 cents per GB. That means a 2TB device would in fact pile on €51.20 in taxes alone (2.5 cents times 2048GB). External drives, or "multimedia drives" as the proposed bill calls them, in capacities greater than 1TB, can be taxed to the tune of 5 cents per gigabyte, so in theory a 2TB drive would cost an additional €102.40 per unit (5 cents times 2048GB). This would be enough to singlehandedly stall PC and component sales. Let's not even consider the ongoing effects of the flooding in Thailand. We won't even attempt a parody of formatted capacity vs. raw capacity.
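For the curious, the per-tier arithmetic reported above can be sketched as follows; the rate table, tier names and rounding are our reading of the reported figures, not the bill's actual text.

```python
# Rough sketch of the per-gigabyte arithmetic in the proposed bill, using the
# rates as reported. Tier boundaries and names are our own reading.
RATES_EUR_PER_GB = {
    "internal_hdd": 0.02,            # standard internal drives
    "internal_hdd_over_1tb": 0.025,  # aggravated rate for >1TB internal
    "external_hdd_over_1tb": 0.05,   # "multimedia drives" >1TB
    "flash_card": 0.06,              # USB pens and memory cards
    "phone_storage": 0.50,           # internal phone storage
}

def storage_tax(capacity_gb: int, device: str) -> float:
    """Tax in euros, applying one flat rate to the full capacity."""
    return round(capacity_gb * RATES_EUR_PER_GB[device], 2)

print(storage_tax(2048, "internal_hdd_over_1tb"))  # 51.2
print(storage_tax(2048, "external_hdd_over_1tb"))  # 102.4
print(storage_tax(64, "phone_storage"))            # 32.0
```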

Ironically, under the original wording of the bill, hard drives under 150GB in capacity are exempt from this tax. Of course, finding anything on sale below 160GB is unlikely these days, unless it's an SSD, a sort of grey area for this bill.

USB pens and memory cards would be taxed at 6 cents to the gigabyte, while internal storage on mobile phones and other similar devices would be charged 50 cents to the gigabyte. Yes, your 64GB iPhone would become €32 more expensive.

Copy devices would also be affected by this legislation: photocopiers and multifunction printers would be taxed according to the number of pages copied per minute, with a 70 ppm MFP being charged up to €227 more per device.

In Portugal, storage devices like DVDs and CDs pay a 3 percent fixed surtax, besides VAT, as a sort of penalty for being copyright violation enablers.

A Socialist Party parliamentarian was quoted as having said that home users would not feel the pinch as the tax was aimed at professionals who use larger capacity drives.

This is not an isolated case of legislative numbnuttery. It now seems to run rampant in the current Portuguese legislature, what with new taxation being created left right and centre, in an attempt to stave off another ‘Greece’.

One recent and particularly obvious money grab was the creation of electronic tolls on motorways leading into neighbouring Spain, which not only peeved the locals when going about their daily business, but also annoyed Spaniards who were not made fully aware of the implications, and were unable to pay in any fashion other than standing in very long queues for hours on end.

Samsung debuts UHS-1 for MicroSD

Samsung is debuting a class of MicroSD cards rated at the UHS-1 speed grade, for what it is calling "LTE smartphones and other advanced mobile applications".

Passed down from SDHC and SDXC cards, the Ultra High Speed 1 (UHS-1) speed grade has previously been used for high-capacity SD cards, but had until now failed to show up in MicroSD form.

It can reach theoretical maximum speeds to the tune of 50MB/s (sequential), which is more than enough for extremely high bitrate applications. UHS-2 is still in the works, but is expected to serve up speeds in the range of 312MB/s.

The cards will begin shipping in 16GB capacities, with "20 nanometre-class" 64Gb-density Toggle DDR 2 modules and a Samsung microcontroller for the UHS-1 interface. Internal Samsung testing has let the world know these new memory cards will zoom along at 80MB/s reads and 21MB/s writes, generally four times the read speed and twice the write speed of Class 10 MicroSD cards.

"MicroSD cards with a UHS-1 interface offer users an extremely high level of performance on their LTE smartphones and for other advanced mobile applications," said Wanhoon Hong, executive vice president, memory sales & marketing, Samsung Electronics. "This allows consumers to enjoy high-quality images and video playback directly from the memory cards, which fully support the advanced performance features of diverse digital gadgets."

MicroSD is the favoured flash memory format of mobile devices such as smartphones and tablets – well, unless you're Apple. As such, Samsung is pitching these as a requirement for the high-bandwidth capacity of LTE data transfers. In truth, UHS-1 was developed not for LTE, as Samsung implies, but for high-bitrate tasks such as recording HD video to flash storage, doing away with digital tapes and other bulky, mechanical storage devices. LTE is just the lowest common denominator right now for its marketing department, and the easiest way to drive the message home.

The new MicroSD started full-on production last month at the company’s advanced NAND fabrication plant in Hwaseong, South Korea, and higher density (capacity) MicroSD cards have been promised by the memory giant.

Pricing and availability have yet to be announced.

Nvidia's mobile Geforce 600M binning gets messy

Over the last few Nvidia GPU generations, we have become accustomed to Nvidia rejigging its mobile GPU chips from one generation to another, rebranding its old parts as the low end of the next generation and marketing the shiny new stuff, given its performance advantage, as the high-end kit.

With Kepler, however, things seem to be a bit upside down. As the new parts are far more energy-efficient than their predecessors, Nvidia seems to have thought it OK to stick Kepler in the middle of the range and leave the high-end 600M parts to Fermi. That's right: the high-end Geforce 675M and 670M are based on Fermi, while the mid-range parts are Keplers. The Geforce GT 640M LE may or may not be a Kepler part, so Nvidia suggests you look under the hood before buying. Fermi also resurfaces in the Geforce GT 635M, 630M and 620M parts, throwing another spanner in the works.

Brian Choi, Product Marketing Manager at Nvidia, responding to users’ comments, said: “Our GeForce GT lineup has new chips up and down the stack, and Kepler can be found in GeForce GTX 660M, GeForce GT 650M, GeForce GT 640M and most GeForce GT 640M/LE.  The performance range is guaranteed within a branding range but if you’re interested in the particular architecture, we encourage you to check it out before buying.”

While this doesn't sound too bad, it's the benchmarketing you can find on this page – trying to emphasise the superiority of the Geforce 600M series – that raises some eyebrows. It's true that Nvidia has not been claiming wildly superior performance for its Kepler design; then again, review sites have done that job for it.

The company has simply stated the cards to be more energy efficient while outperforming their GF1xx ancestry. However, we can't help but question why a GTX 660M (based on Kepler, according to Nvidia) with 384 CUDA cores, pushing pixels at 835MHz, gets hammered by a GTX 670M running 336 CUDA cores at 598MHz.

You may consider memory bandwidth to be the issue here, as the GTX 660M carries a 128-bit interface and the GTX 670M a 192-bit one, but the Kepler units also carry just 16 Raster Output Pipelines, aka ROPs. That being the case, Kepler is seriously underpowered in its mobile version. Were it not so hindered, a GTX 660M should, clock for clock, CUDA core for CUDA core, wipe the floor with the 675M.
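One factor the raw spec sheet hides is that Fermi's shaders run at twice the listed core clock (the "hot clock"), something Kepler dropped. A quick bit of arithmetic with the hot clock factored in makes the result above a little less surprising:

```python
# Paper shader throughput (cores x shader clock, in MHz) for the two mobile
# parts discussed above. Fermi shaders run at 2x the listed core clock
# ("hot clock"); Kepler shaders run at the core clock.
kepler_660m = 384 * 835          # 320640
fermi_670m = 336 * (598 * 2)     # 401856

print(round(fermi_670m / kepler_660m, 2))  # 1.25
```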

You can see the differences below.

We had already raised this concern when we discussed the Acer Timeline M3 Ultrabook powered by the Kepler-based GT 640M, earlier this month, as the Kepler-based card was only mildly superior to its ancestor.

Another point that sends alarm bells ringing is that throughout the entire benchmarketing exercise, Nvidia does not mention power consumption a single time – something vital to notebooks – while claiming: "The GeForce 600M series represents a huge step forward, continuing to optimize power usage, heat output, and game performance, making ultrabooks with dedicated graphics a reality." This is only partly true if 40nm-based GF1xx parts continue to be present.

In the end, what Nvidia promises is performance within a given price range, and that’s what it is delivering. If you specifically want Kepler, you should definitely be looking under the hood and not jump at the opportunity to buy the first 600M-based notebook that shows up.

Micron's RealSSD plays it safe, too safe

Legit Reviews has the Micron RealSSD P400e 200GB Enterprise SATA III SSD on the test bench. SSDs have become more and more reliable and durable, but in this case it costs you 56GB of the 256GB of flash the drive carries in total. It's a steep price to pay for over-provisioning, we think, but you can never cut corners on data security, now can you?

Tom’s Hardware has updated its regular System Builder Marathon feature, for March 2012. It now includes all the updated components for the $650 “budget” and $1250 “enthusiast” categories, with the inclusion of AMD’s Radeon HD 7970 and some affordable SSD boot drives.

OC Workbench has a go at Kingston's HyperX 4x2GB DDR3-2400 CL11 DIMMs. These carry dual XMP profiles for that kick in the backside, but are programmed for JEDEC's DDR3-1333 standard. They were tested on an X79 motherboard (Biostar TPower X79) to take advantage of the quad-channel insanity. Also on the bench is Mad Catz' Cyborg MMO 7 gaming mouse. Of course it targets MMO players, and with a lot of customisation available through the software side of things, you can really turn this mouse into your best in-game friend.

Hardware Canucks has a Gigabyte GA-X79-UD5 motherboard for Sandy Bridge-E processors. It’s reasonably priced, says HC, at $300 and it does a decent job at overclocking. In fact, the motherboard is all ‘decent enough’, without a particular outstanding feature.

Xbit Labs has an MSI Big Bang Xpower II motherboard in the lab. Big Bang motherboards pack a ton of features, and in this case that takes up quite a lot of real estate. It targets the extreme overclocker and does so quite well, but its sheer size makes it a difficult fit without planning a whole build around it.

The 2012 iPad, as Apple Care will surely label it, gets the Techspot eye and, better yet, a direct comparison to the iPad 2. The Retina display and overall performance are impressive, says Shawn, but those who were expecting a top-to-bottom reinvention of the device will be disappointed.

Hardware Secrets takes a look at 3R System’s AK6-600M power supply. 3R is a Korean speciality case manufacturer that has moved into power supplies. The AK6 is a modular power supply with a peak wattage of 700W. Despite the cheap price it does carry some decent features.

Symantec divorces Huawei for fear of US blowback

In one of those rare moments where doing business with China is detrimental to your bottom line, Symantec has exited its joint venture with Huawei, the Chinese networking and telecommunications giant, for fear of being sidelined in the security technology race in the US.

According to the New York Times’ sources, Symantec backed out of the joint venture due to concerns that it would impact its relationship with the US military and homeland security. Symantec and Huawei, who formed the Huawei-Symantec joint venture back in 2008, had sat down and come to a buyout agreement of $530 million, leaving the company solely in Huawei’s hands.

'Privately-owned' Huawei (which critics suggest is, in the People's Republic of China, a euphemism for ex-military higher-ups and central committee politicians with enough clout to go into business for themselves) may be seen as a security risk for networking and security interests such as Symantec while doing business in the US.

The reason for this is that, in a rather arms-wide-open way, the Department of Homeland Security (DHS) has been pushing for greater public-private information sharing through a programme called the Joint Cybersecurity Services Pilot (JCSP), on the off chance it gets more than it has to dole out. Concerns that Symantec could be removed from the JCSP information sharing list due to its close relationship with Huawei have been singled out as the reason for the breakup.

Naturally, having a military contractor like Huawei-Symantec build technology that allows foreign military and intelligence agencies to tap into your own intelligence network does sound like a major no-no.

Huawei-Symantec, as a China-based manufacturer of networking and telecoms equipment, operates under the requirement that its technology be accessible to both the government and the military (aka the People's Liberation Army). As more and more technology is sourced from China, fears that it will make its way into western security and telecommunications infrastructure have been voiced by several intelligence outfits, including the Australian Security Intelligence Organisation and, more recently, the DHS in the US.

According to the same report, Huawei had already given up on a bid for 3Com, another network and security equipment contractor for the US military, after a US government panel took issue with connections to the Chinese government.

Lucid enables high-end gaming on low-end systems

The inventors of the groundbreaking, albeit deceased, mixed-brand multi-GPU chip, the Hydra Engine, have decided to break the mould once again and enable high-end gaming on low-end graphics cards.

Lucid Logix has over time become a 100 percent software-driven company, releasing several new software apps specifically targeting the optimisation of Intel GPU performance. Virtu technology, for one, is implemented by some motherboard manufacturers and aimed at combining the best features of Intel’s Sandy Bridge GPU with discrete GPUs.

Now the company has launched DynamiX. With this utility, the company claims it can get some serious gaming going on your Intel HD 2000-based graphics core. It is actually aimed at enabling gaming on mainstream notebooks without resorting to discrete graphics from Nvidia or AMD. DynamiX is a software utility that gives a computer with Intel HD 2000-class graphics the ability to adjust in-game graphics settings on the fly, based on a framerate threshold. If frames fall below it, the software kicks in and scales back in-game detail, smoothing out your framerate immediately.
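As a rough illustration of that threshold mechanism, here is a toy sketch; the function, the settings ladder and the headroom margin are our own invention, not Lucid's API.

```python
# Hypothetical sketch of a framerate-threshold detail scaler, in the spirit
# of what is described above. Names and values are illustrative only.
SETTINGS_LADDER = ["ultra", "high", "medium", "low"]  # in-game detail presets

def adjust_detail(current_fps: float, threshold: float, level: int) -> int:
    """Step detail down when fps falls below the threshold, and back up
    when there is comfortable headroom. Returns the new ladder index."""
    if current_fps < threshold and level < len(SETTINGS_LADDER) - 1:
        return level + 1                       # too slow: scale back detail
    if current_fps > threshold * 1.25 and level > 0:
        return level - 1                       # plenty of headroom: restore detail
    return level

level = 0  # start at "ultra"
for fps in (22, 19, 31, 45):   # a made-up framerate trace
    level = adjust_detail(fps, threshold=30, level=level)
print(SETTINGS_LADDER[level])  # high
```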

“With DynamiX, a single embedded GPU is all you will need to enjoy your favorite high-performance titles on most new notebooks without reducing display resolution or minimizing game performance settings,” said Offir Remez, Lucid co-founder and president.

As there is no such thing as a free lunch, the software has been cut-down for demo purposes and will only work with Bethesda Studios’ high-fantasy romp The Elder Scrolls V: Skyrim. Lucid will later charge an as-of-yet undetermined fee for the full software but pricing, we believe, will depend on consumer reactions and the supported game list, which, according to the company, will grow over time. Remez added, “We are offering this free trial beta version as a proof-of-concept, while working to provide DynamiX for more games. Try it and tell us what you think!”

We think Bethesda and Intel couldn’t be happier, right now.

You can get the details and software – free of charge for a limited time – so snap to it. Head over here.

Nvidia GTX 680 retakes performance crown, barely

The Geforce GTX 680 has finally broken cover. With the NDAs going up in smoke today, the now unshackled reviewers have posted their take on Nvidia’s newest and brightest Geforce graphics chippery, and it’s looking mildly good for the green one.

If you were expecting something of a pure revolution in graphics performance, Nvidia might disappoint. It hasn't disappointed, however, when it comes to rethinking the design: it seems Nvidia had the right idea at the right time.

According to the official communiqué, Nvidia has gifted the new 3.54 billion transistor GTX 680 with 1536 CUDA cores and dialled up the core clock all the way to 1006MHz. Memory is handled by four 64-bit controllers (a 256-bit interface in total), each managing 512MB of 6GHz GDDR5 for 2GB overall. The die is comfortably small at 294mm2, much, much smaller than the GTX 580's huge 520mm2. It's good to see Nvidia didn't reach for the moon here.

Performance-wise, it delivers the claimed 10 to 15 percent superiority over AMD's HD 7970 chip, though you can always argue that AMD's counterpart is clocked lower at reference. Both are formidable overclockers, so there will be another duel once post-launch custom-cooled cards arrive from Nvidia's add-in board partners. We have to confess we were expecting a greater lead on Nvidia's behalf, but this takes us to the next point: features.

First off, Kepler introduces new hardware multimonitor support that lets you, for example, game on three screens and keep a fourth screen outputting something completely different, like your desktop or an HD movie. This is Nvidia's response to Eyefinity, which also begs the question: why haven't we seen multimonitor testing going on in the reviews?

Secondly, what Nvidia seems to have nailed on the head is everything related to power consumption. Ever since AMD's 5000 series cards, Nvidia has been trailing AMD on idle power, but it decided to up the ante with the Kepler architecture.

Kepler introduces, among other things, GPU Boost, a sort of Turbo Core for GPUs, which boosts the core clock when needed and saves power when it isn't. Previously, GPUs relied on software profiles for specific games: as soon as the EXE file was identified, the GPU would rev up its engines. GPU Boost instead clocks the GPU on the fly, depending on the task requirements. It also seriously downclocks the card mid-game when the full muscle of the chip is not required, which brings with it serious power savings.
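A toy control loop in the spirit of GPU Boost as described above might look like this; the base clock matches the GTX 680's 1006MHz, but the step size, cap and power thresholds are invented for illustration.

```python
# Illustrative clock-adjustment loop: raise the clock while there is power
# headroom, back off when the budget is exceeded. Not Nvidia's algorithm;
# MAX_MHZ, STEP and the 90 percent headroom margin are made up.
BASE_MHZ, MAX_MHZ, STEP = 1006, 1110, 13
TDP_W = 195

def next_clock(clock: int, board_power_w: float) -> int:
    if board_power_w < TDP_W * 0.9 and clock + STEP <= MAX_MHZ:
        return clock + STEP          # headroom available: boost
    if board_power_w > TDP_W and clock - STEP >= BASE_MHZ:
        return clock - STEP          # over budget: throttle back
    return clock

clock = BASE_MHZ
for power in (150, 160, 170, 200):   # a made-up board power trace (watts)
    clock = next_clock(clock, power)
print(clock)  # 1032
```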

Third, Nvidia has introduced new AA modes, FXAA and TXAA. TXAA stands for Temporal Anti-Aliasing and has two modes: one provides 16x MSAA quality at a 2x MSAA performance hit, while the other offers better-than-16x MSAA quality at a 4x MSAA hit.

The Geforce GTX 680 is also rated as a 195W chip and the launch price is a rather ‘affordable’ $499, something that Nvidia wants to drive into consumers’ skulls: this time you can have your cake and eat it. Pricing seems to be spot on for the reference card, not to mention you needn’t upgrade your power supply. Expect to see insanely priced Super Overclocked editions coming soon, considering the card’s overclocking potential.

AMD and Nvidia have been locked in fierce competition for a while now. Every time a new chip is launched, it takes over where the rival left off. If anyone questioned AMD about its 6000 series' performance, it made up for it in power efficiency and display features. If anyone questioned Nvidia about its display features and power efficiency, it shut them up with Kepler.

We’ve collected a few reviews so you can take a look for yourself.

Tom's Hardware

Guru 3D

Hardware Heaven

Hardware Canucks

Legit Reviews

Xbit Labs

Hot Hardware

Open source community gets big push from AMD, Intel

A double serving of GPU goodness has hit the open source community, with both AMD and Intel releasing open source graphics driver code for current and upcoming graphics devices.

On AMD's side, covering the HD 7000 series GPUs and the Trinity APU (with its HD 7000-class graphics), developers have received an updated set of patches to be added to the Linux kernel in order to properly support AMD's Southern Islands graphics. Supported are not only the current Tahiti, Pitcairn and Cape Verde chips, but also Trinity's graphics element, Aruba.

With this, we hope the graphics support will make it into one of the next Linux kernels soonish. Version 3.4 is coming up, but that's a bit short notice.

At almost the same time, Intel also released code to the open source community for its 'next big thing', Haswell, the successor to Ivy Bridge. This should also allow Intel to integrate GPU support into the upcoming 3.4 kernel, if it is done fast enough.

Haswell is expected to introduce a couple of features to boost graphics performance considerably, one of which might be a rumoured L4 cache, possibly dedicated eDRAM, much like the graphics parts in consoles, so the early code helps build up hype and support for the CPU. Intel's Open Source Technology Centre is moving now so that, by the time the CPU is out, it is fully supported by Linux distributions.

Both companies seem to be taking Linux a bit more seriously lately, with several updates and launches announced in the past year that drastically improved GPU performance on the OS. With the two big names in ‘APUs’ vying for a leadership position on the desktop, every little bit counts, and Linux with its HPC computing genes, seems to be a fine place to flex your GPU-on-CPU muscle.

Intel updates desktop CPU roadmap

Intel will release its Haswell processors from March 2013 onwards, according to a recently leaked roadmap.

The updated roadmap featuring Intel’s upcoming 22nm desktop processors, including the elusive – and as-of-yet un-specced – Haswell processor, shows Sandy Bridge meeting its demise this coming month as Ivy Bridge is released and Sandy Bridge-E taking a longer time to fall off the roadmap.

If all goes well for Intel, just a year from now it will be releasing its 22nm Haswell processor.

Haswell, whose processor model numbers won't be known for a while yet, will be Intel's first 22nm 'tock': it inherits the "3D transistor" technology from Ivy Bridge and piles on a few new features, such as AVX2 (a superset of the AVX instructions) and Transactional Synchronisation Extensions, a memory technology Intel hopes to leverage to improve shared memory performance. Intel will also package it differently from Sandy and Ivy Bridge processors: it moves to an LGA 1150 socket, so socket compatibility isn't assured.

Probably the thing that stands out most is that Sandy Bridge-E will keep going for a full 18 months, without any sign of slowing down. Extreme Edition chips continue all the way up to, and including, the first half of 2013. This means Ivy Bridge-E will launch in the second half of 2013. Until it does, Haswell will not be able to ramp up in performance, lest it cast a shadow on the lower-end Extremes. Expect Ivy Bridge-E equivalents to reach the market by back-to-school 2013.

In conclusion, if Intel is to make the 12-month life cycle a regular affair (Ivy Bridge, then Haswell), Broadwell might make a guest appearance in April 2014. That would also be a definite plus for integrators, because a regular release schedule means a more stable launch and support schedule for new systems.