
Samsung advertises with pirates

Samsung has been outed as one of the top advertisers on one of Ukraine’s largest file-sharing sites.

Big Content groups have formed an initiative called “Clear Sky” in Ukraine, which seems to be focused on naming and shaming the advertising antics of Samsung, Nokia, Canon, Carlsberg and Coca-Cola.

According to Torrent Freak, Clear Sky sees funding of P2P sites by well-known international brands as a major problem.

Ukraine has been branded by the US as one of the top piracy havens in the world, and in a bid to “counter this image”, local big content groups set up Clear Sky.

The coalition’s goal is to find ways to combat online piracy. Stage one is naming and shaming international companies who advertise with pirates.

Ex.ua and FS.ua have millions of visitors per week and generate a healthy revenue stream through ads, some of which are paid for by global companies.

Nearly 10 percent of all ads on the two file-sharing sites are financed by well-known international brands. Nearly half of all those ads come from Samsung.

A big chunk of Samsung’s advertising budget in Ukraine goes to the two file-sharing sites, according to the report.

We had a look at both sites this morning to look for a screenshot and found them rather short on adverts. We guess it was just a bad day. 

Samsung fudges Note 3 benchmark

Samsung has been caught out fudging the benchmarks of its Galaxy Note 3.

Ars Technica was testing the Samsung Galaxy Note 3 and noticed that it was scoring rather well on the official benchmark tests.

The reviewers smelt a rat when they noticed that Samsung’s 2.3GHz Snapdragon 800 did suspiciously well against the identical 2.3GHz Snapdragon 800 in LG’s G2.

What appeared to be happening was that Samsung was boosting the US Note 3’s benchmark scores with a special, high-power CPU mode that kicks in when the device runs popular benchmarking apps.

This is not the first time that Samsung has been caught out. In fact the same trick was tried with the international Galaxy S4’s GPU.

Now it seems that Samsung is playing the same game in the US.

After a bit of research, Ars worked out how to disable this special CPU mode, so for the first time we can see just how much Samsung’s optimisations inflate benchmark scores.

Normally while the Note 3 is idling, three of the four cores shut off to conserve power and the last core drops down to a low-power 300MHz mode.

When a CPU benchmarking app is open, the Note 3 CPU locks into 2.3GHz mode and the cores never shut off.

Using Geekbench’s multicore test, the Note 3’s benchmark mode gives the device a 20 percent boost over its “natural” score. With the benchmark stripped away, the Note 3 drops down to LG G2 levels.

Looking at the Java code on the phone, Ars found an app which tweaks Geekbench, Quadrant, AnTuTu, Linpack, GFXBench, and even some of Samsung’s own benchmarks.
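The mechanism described above boils down to matching the foreground app against a hard-coded list of benchmark packages and overriding the normal power-saving behaviour. Here is a minimal sketch of that logic in Python — the package names and policy dictionary are our own illustrative assumptions, not Samsung’s actual code:

```python
# Sketch of the benchmark-detection trick as described in the Ars report:
# if the foreground app is a known benchmark, keep all four cores online
# and locked at 2.3GHz instead of letting the governor idle at 300MHz.
# Package names below are illustrative guesses, not Samsung's real list.

BENCHMARK_PACKAGES = {
    "com.primatelabs.geekbench2",                     # Geekbench (assumed)
    "com.aurorasoftworks.quadrant.ui.professional",   # Quadrant (assumed)
    "com.antutu.ABenchMark",                          # AnTuTu (assumed)
}

IDLE_MHZ = 300    # low-power idle clock reported for the Note 3
MAX_MHZ = 2300    # the 2.3GHz Snapdragon 800's top clock

def cpu_policy(foreground_package: str) -> dict:
    """Return the CPU policy the report describes for a given foreground app."""
    if foreground_package in BENCHMARK_PACKAGES:
        # "Benchmark mode": all four cores stay on, clock locked at maximum.
        return {"cores_online": 4, "min_mhz": MAX_MHZ, "max_mhz": MAX_MHZ}
    # Normal behaviour: three cores shut off, the last can drop to 300MHz.
    return {"cores_online": 1, "min_mhz": IDLE_MHZ, "max_mhz": MAX_MHZ}
```

The whole trick is that `min_mhz` gets pinned to the maximum clock whenever a recognised benchmark is in the foreground, which is exactly the behaviour Ars disabled to get the “natural” scores.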

Ironically, with the benchmark booster disabled, the Note 3 still comes out faster than the G2 in this test, so it was an entirely unnecessary fudge. Samsung could still have said that it would have won the benchmark races, just not by as much as it did.

Now that it has been caught out, no one will actually believe that the Samsung phone was ever faster. Own goal or what? 

Exynos 5 Octa outpaces Snapdragon 600

Samsung’s first 28nm SoC, the Exynos 5 Octa, is apparently capable of outpacing Qualcomm’s Snapdragon 600 in AnTuTu, Geekbench 2 and Quadrant, the most popular Android benchmarks.

SamMobile put the new chip through its paces and came up with some interesting results. It scored 27617 on AnTuTu and 12726 on Quadrant. The reviewers said they were blown away by the results and frankly they are pretty impressive. The Exynos version seems to be a bit faster than the Snapdragon 600 version, let alone previous-generation A9-based SoCs.

It should be noted that the Snapdragon 600 version seems quite competitive in its own right. In Geekbench 2 it scores 3163, ahead of the HTC One with 2687 and LG Nexus 4 with 2040 points. There’s a simple explanation, though: the Snapdragon 600 in the S4 is clocked at 1.9GHz, which is 200MHz more than in the HTC One. Firmware and OS differences should be taken into account as well.
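Some quick back-of-the-envelope arithmetic on those figures shows why the clock bump alone doesn’t explain the gap — the S4’s lead over the HTC One is larger than its clock advantage:

```python
# Arithmetic on the Geekbench 2 scores quoted above.
scores = {"Galaxy S4 (Snapdragon 600)": 3163, "HTC One": 2687, "LG Nexus 4": 2040}

s4 = scores["Galaxy S4 (Snapdragon 600)"]
lead_over_one = (s4 / scores["HTC One"] - 1) * 100      # ~17.7% ahead of the HTC One
lead_over_n4 = (s4 / scores["LG Nexus 4"] - 1) * 100    # ~55.0% ahead of the Nexus 4

# The S4's chip runs at 1.9GHz versus 1.7GHz in the HTC One:
clock_advantage = (1.9 / 1.7 - 1) * 100                 # ~11.8% clock advantage

print(round(lead_over_one, 1), round(lead_over_n4, 1), round(clock_advantage, 1))
```

An 11.8 percent clock advantage buying a 17.7 percent score lead is consistent with the firmware and OS differences mentioned above doing some of the work.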

Although Samsung’s first big.LITTLE SoC sounds like a winner, we also need to look at the big picture. Most Galaxy S4s will be shipped with Qualcomm’s LTE-enabled Snapdragon 600. Samsung’s quasi-octa-core doesn’t have LTE on board and it wasn’t ready in time for the S4 launch.

In other words, it might be faster, but many consumers won’t be able to get one. 

Sony and Intel share hardware information

There was a session at IDF yesterday where several fellows were, so to speak, interrogated by a vast audience.

Some might have been real developers – the first question was from a chap who asked all about the sort of lines on chips, such an interesting question that the moderator just gave up.

People asked so many different questions on such diverse topics that we almost wished that we had just stayed in bed all day, and had a holiday.

But the relentless questions continued until about 6PM. One of the fellows disclosed details of a relationship between Sony and Intel where hardware configurations were shared over the internet – and produced better benchmarks than Sysmark.

Apparently, between them, Sony and Intel have collected over five million configurations of Sony Vaios over the interweb, and that has helped these companies discover what software is on their customers’ computers.

Er, have neither Sony nor Intel ever heard of privacy?

We put our hand up at one point and wanted to ask the several fellows, apart from Dr Genevieve Bell, whether they knew how to get water out of frogs in Aussieland.

But this Sony one is interesting, isn’t it? There didn’t seem to be many spinners around, nor shills.

We think Intel and Sony need to be interrogated more. 

TweakTown's Week in Review: Top Picks

Reflecting on the technological happenings over at TweakTown for this week, in usual fashion we kicked things off with our movie buff Ben checking out the Blu-ray release of Limitless.

All of us have goals in life. Some we meet, some we don’t. So what stops us from reaching the goals that we fall short on? Poor memory? Patience? Concentration? Intelligence? But what if there was a drug that could overcome all of these? That’s the premise of Limitless, one of the smartest and most intriguing thrillers released for some time.

Shifting focus quickly onto something that will give the most die-hard enthusiasts out there the tingles, our VGA specialist Shane Baxter fired up the mighty ASUS MARS II for a second round of action, following our prior week’s full review of the ROG behemoth.

This time he puts the dual GTX 580 monster into overdrive by taking its clock speeds north manually and re-running our extended array of benchmarking tests. If you’re seeking the ultimate in graphics power, you should be all over this one.

Looking to build a high-powered yet compact rig for LAN, HTPC or other duties? The Mini-ITX motherboard format has matured wonderfully, and these days companies are finding ways to cram a great deal of power and features into this highly condensed format.

We looked at an offering from ASUS this week in the P8H67-I, which allows for Sandy Bridge power via the H67 chipset and a full x16 (electrical) PCI-E slot for non-crippled VGA grunt via your discrete card of choice.

With the highlight of the week being the almighty presence of ASUS’ space-aged MARS II dual GTX 580 monster in the labs, we wanted yet another reason to put this star of the show (or is that, the cosmos in this case?) back in the limelight.

What better way to push it to the edge than with some super high resolution, triple monitor NVIDIA Surround action using three Dell U2410s @ 1920 x 1200 each (giving a combined gaming resolution of 3600 x 1920). You can see how (well) the card handled that mighty task on its own under numerous intensive benchmarking tests here.

And that about wraps up the major happenings from our neck of the woods for this week. Until next time, adios folks!

Oracle makes big claims about its x86 servers

Oracle has announced an update to its x86 servers featuring Intel’s latest, the E7 family.

It’s talking about a new benchmark result for the servers, which are based on Intel’s recently announced Xeon E7 processor family. According to Oracle, the servers can run enterprise Java applications, which it says is demonstrated by the “record-breaking four-processor performance” on SPECjbb2005 for the upcoming Sun Fire X4470 M2 server.

Apparently, the new servers also offer an “up to” 39 percent increase in overall system performance compared to their older cousins, meaning customers could, in theory, see more memory capacity. It also means the eco-conscious can run software more efficiently.

And the company is making some bold claims which could ruffle the feathers of its competitors. It says its servers have up to 51 percent lower three-year total cost of ownership (TCO) compared to multi-vendor alternatives from the likes of HP or IBM.

Moving away from the boasting and willy waving, the x86 line works with Oracle’s Database 11g, Oracle Fusion Middleware 11g and Oracle Applications.

Intel makes mainstream servers mission critical

We’ve seen the latest of Intel’s Xeons – or at least, the latest to be released – as the firm launched its Nehalem-based Xeon 7500 processor series on Tuesday. 

Claiming the “largest performance leap” in Xeon history, Intel reckoned the new series delivers a 3x improvement across a range of benchmarks, which apparently means data centres can now feel free to replace 20 single-core servers with just one Xeon 7500-based system. 

The new arrival is apparently something of a chip-send in terms of energy efficiency, computing speed and a whole plethora of other features, according to Intel, which long-windedly blew its own trumpet for an hour at the launch event. 

“It is not often that you launch a product so revolutionary you think it will change the market….But the Xeon 7500 will democratise the high-end, while delivering mission critical computing to the mainstream,” gushed Intel’s Kirk Skaugen, vice president of Intel’s architecture group and general manager of its data centre group.

“Moore’s Law enables us to deliver this…We have been in the server market for 25 years, have successfully moved the Web from Sparc-based architecture to Intel-based computing,” he continued. 

“The Xeon 7500 offers 20 new reliability features, many found for the first time in x86 architecture,” said Skaugen, giving the example of a feature called machine check architecture recovery (MCA recovery), which has been in mainframes, RISC and Itanium – and is now also in the Xeon 7500. 

“In a normal machine, a multi-bit memory error caused by cosmic or alpha rays is enough to halt the system….But MCA recovery notifies the OS of a multi-bit error and allows it to keep going,” Skaugen assured us. 

Indeed, with eight-core, 16-thread performance, Intel certainly believes it is well placed to play an even bigger role than it already holds.

“There is going to be an explosion in data growth. Intel is committed to getting one billion people connected to the Internet,” Skaugen proclaimed.

“We want today’s supercomputer to be under every desk in the very near future. By 2013 there will be a million CPUs in just 100 of the top supercomputers….We believe there is a category for higher core count and broader memory capacity.” 

Skaugen went on to praise the Xeon 7500’s advances in scalability, which allow new designs to range from two-socket platforms up to 256 chips per system, as well as the purported 4x increase in memory capacity (up to 1 terabyte in four-processor configurations) and 8x increase in memory bandwidth.

“At the end of the day, mission critical computing is about reliability and zero down time,” he concluded.