Monday 25 March 2013

Intel’s infamous Pentium processor got its start 20 years ago today with the release of the Pentium 60 CPU. That chip utilized Intel’s 5-volt Socket 4, was built on an 800-nanometer process and carried 3.1 million transistors. For comparison, today’s third-generation Ivy Bridge processors use a 22-nanometer process and contain 1.4 billion transistors.


As ExtremeTech notes, early Pentium processors weren’t anything special. In fact, the first generation was barely faster than the 486DX chips they replaced. It wasn’t until the Pentium 3 came along that Intel really started to find traction with the platform, outpacing competing chips from AMD and Cyrix with ease.
Pentium 4, however, was regarded as a step back in the eyes of many. The architecture was designed to run at very high clock speeds (which it did) but at the expense of real-world performance. These chips also ran extremely hot, meaning they required a large heatsink and noisy cooling fan to keep under control. The misstep ultimately allowed AMD to take the performance crown with their highly successful Athlon line for several years.
Intel eventually found their way again with the Core architecture that debuted in 2006. Interestingly enough, this platform was based on the same P6 architecture used in the first Pentium Pro back in 1995. The rest is pretty much history as Intel has had tremendous success with more recent platforms like Sandy Bridge and Ivy Bridge. With Haswell in sight and AMD hardly visible in the rear-view mirror, Intel appears set to carry on without much major competition for the foreseeable future.
AMD Radeon HD 7790 Review


AMD spent the better part of 2012 releasing an entire line of 28nm GPUs, starting with the Radeon HD 7970 in January and followed by over half a dozen more cards throughout the next 8 months.
Late in the year we wrapped things up with our feature “The Best Graphics Cards: Nvidia vs. AMD Current-Gen Comparison”, which saw Nvidia take out the $100-$150 price bracket with the GeForce GTX 650 Ti, while AMD claimed the $150-$200 range with the Radeon HD 7850.
As well-thought-out as the Radeon HD 7000 series was, we kind of hoped 2012 would mark both the beginning and the end for the series, much as 2011 did for the previous generation. Something entirely new was not to be, however: three months into 2013, we find ourselves reviewing a brand new AMD graphics card that isn’t based on a new architecture.
Rather what we have is the latest member of the Southern Islands family, designed to fill the gap between the Radeon HD 7770 and 7850.
Not the most exciting product to be released, and its performance will be a far cry from what we saw with the GeForce GTX Titan last month. That said, the new Radeon HD 7790 is likely going to be of more interest than the GTX Titan to many of you for the simple reason that it is affordable and should be pretty good value as well.
The Radeon HD 7790 will be available in volume beginning April 2nd for as little as $150, which prices it smack bang between the 7770 and 7850. Current pricing has the Radeon HD 7770 at around $110-$120, while the 7850 costs between $180 and $200.
Last time we checked the GeForce GTX 650 Ti represented the best value in this bracket, but it looks like AMD is trying to win us over.

Radeon HD 7790 in Detail

The Gigabyte Radeon HD 7790 we tested measured 19cm long, a typical length for a modern mid-range graphics card. Gigabyte’s own version of the GTX 650 Ti measures 23cm long, though its actual PCB is considerably shorter at just 14.5cm. The new Radeon’s GPU core runs at 1GHz, the highest frequency of any Radeon card, matching the 7770, 7870 and 7970 GHz Edition cards.
The HD 7790 is clocked 16% higher than the HD 7850, while its GDDR5 memory is also faster at 1500MHz (6.0GHz DDR). Still, pairing that frequency with a minuscule 128-bit memory bus gives the HD 7790 96GB/s of theoretical bandwidth, which is actually a lot less than the old HD 6790.
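If you want to double-check that bandwidth figure, it falls straight out of the memory speed and bus width. Here is a quick back-of-envelope sketch in Python using only the numbers quoted above; nothing here is measured, it is just arithmetic:

```python
# Back-of-envelope check of the Radeon HD 7790's quoted memory bandwidth.
# GDDR5 transfers four bits per pin per clock, so 1500MHz works out to 6.0Gbps per pin.
memory_clock_mhz = 1500      # quoted GDDR5 clock
bus_width_bits = 128         # HD 7790 memory bus

effective_gbps_per_pin = memory_clock_mhz * 4 / 1000        # 6.0 Gbps
bandwidth_gb_s = effective_gbps_per_pin * bus_width_bits / 8
print(f"Theoretical bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~96 GB/s, as quoted
```

The same formula explains why a 128-bit bus holds the card back: doubling the bus width, as the 7850 does, doubles the result even at a lower memory clock.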
Gigabyte has overclocked their 7790 card from 1000MHz to a core speed of 1075MHz. However for the purpose of this review we have clocked the card back to the default AMD specification of 1GHz.
The HD 7790 comes loaded with a 1GB frame buffer, the same as previous-gen mid-range cards. We don't doubt that board partners will release 2GB versions, but because the HD 7790 isn't designed for extreme resolutions, 2GB models aren't likely to provide any performance boost.
The HD 7790's core configuration also differs from the HD 7770. This new GPU carries 896 SPUs, 56 TAUs and 16 ROPs. That's 40% more SPUs and TAUs, while the ROPs remain the same.
Gigabyte has chosen to cool the "Bonaire XT" GPU using their own custom design, which employs a massive 95mm fan. Under this fan is a relatively small aluminum heatsink measuring 11.5cm long, 9cm wide and 2cm tall at its thickest point. While that might sound like a decent size heatsink, by graphics card standards it is actually quite small.
The HD 7790 operates in near silence because even under load it draws only 85 watts, and as little as 3 watts at idle courtesy of the ZeroCore Power technology.
To feed the card enough power, AMD has included a single 6-pin PCI Express power connector -- the same setup you'll find on the HD 7770, 7850 and GTX 650 Ti, as well as numerous other mid-range graphics cards.
Naturally, the HD 7790 supports Crossfire and so there are a pair of connectors for bridging two cards together. The only other connectors are on the I/O panel. The AMD reference version has a dual DL-DVI connector, a single HDMI 1.4a port and two Mini DisplayPort 1.2 sockets. The Gigabyte version is a little different as it employs a pair of DL-DVI connectors, a single HDMI 1.4a port and a standard DisplayPort socket.

I’ve spent a lot of time with mega datacenters (MDCs) around the world trying to understand their problems – and I really don’t care what area those problems are as long as they’re important to the datacenter. What is the #1 Real Problem for many large scale mega datacenters? It’s something you’ve probably never heard about, and probably have not even thought about. It’s called false disk failure. Some mega datacenters have crafted their own solutions – but most have not.
Why is this important, you ask? Many large datacenters today have 1 million to 4 million hard disk drives (HDDs) in active operation. In anyone’s book that’s a lot. It’s also a very interesting statistical sample size of HDDs. MDCs get great pricing on HDDs. Probably better than OEMs get, and certainly better than the $79 for buying 1 HDD at your local Fry’s store. So you would imagine if a disk fails – no one cares – they’re cheap and easy to replace. But the burden of a failed disk is much more than the raw cost of the disk:
  • Disk rebuild and/or data replication of a 2TB or 3TB drive
    • Performance overhead of a RAID rebuild makes it difficult to justify, and can take days
    • Disk capacity must be added somewhere to compensate: ~$40-$50
    • Redistribute replicated data across many servers
    • Infrastructure overhead to rebalance workloads to other distributed servers
    • Person to service disk: remove and replace
      • And then ensure the HDD data cannot be accessed – wipe it or shred it
Let’s put some scale to this problem, and you’ll begin to understand the issue.  One modest size MDC has been very generous in sharing its real numbers. (When I say modest, they are ~1/4 to 1/2 the size of many other MDCs, but they are still huge – more than 200k servers). Other MDCs I have checked with say – yep, that’s about right. And one engineer I know at an HDD manufacturer said – “wow – I expected worse than that. That’s pretty good.” To be clear – these are very good HDDs they are using, it’s just that the numbers add up.
The raw data:
RAIDed SAS HDDs
  • 300k SAS HDDs
  • 15-30 SAS disks fail per day
    • SAS false fail rate is about 30-45% (10-15 per day)
    • About 1/1000 HDD annual false failure rate
Non-RAIDed (direct map) SATA drives behind HBAs
  • 1.2M SATA HDDs
  • 60-80 SATA disks fail per day
    • SATA false fail rate is about 40-55% (24-40 per day)
    • About 1/100 HDD annual false failure rate
What’s interesting is the relative failure rate of SAS drives vs. SATA. It’s about an order of magnitude worse in SATA drives than SAS. Frankly some of this is due to protocol differences. SAS allows far more error recovery capabilities, and because they also tend to be more expensive, I believe manufacturers invest in slightly higher quality electronics and components. I know the electronics we ship into SAS drives are certainly more sophisticated than what goes into SATA drives.
False fail? What? Yea, that’s an interesting topic. It turns out that about 40% of the time with SAS and about 50% of the time with SATA, the drive didn’t actually fail. It just lost its marbles for a while. When they pull the drive out and put it into a test jig, everything is just fine. And more interesting, when they put the drive back into service, it is no more statistically likely to fail again than any other drive in the datacenter. Why? No one knows, though I have my suspicions.
I used to work on engine controllers. That’s a very paranoid business. If something goes wrong and someone crashes, you have a lawsuit on your hands. If a controller needs a recall, that’s millions of units to replace, with a multi-hundred dollar module, and hundreds of dollars in labor for each one replaced. No one is willing to take that risk. So we designed very carefully to handle soft errors in memory and registers. We incorporated ECC like servers use, background code checksums and scrubbing, and all sorts of proprietary techniques, including watchdogs and super-fast self-resets that could get operational again in less than a full revolution of the engine.  Why? – the events were statistically rare. The average controller might see 1 or 2 events in its lifetime, and a turn of the ignition would reset that state.  But the events do happen, and so do recalls and lawsuits… HDD controllers don’t have these protections, which is reasonable. It would be an inappropriate cost burden for their price point.
You remember the Toyota Prius accelerator problems? I know that controller was not protected for soft errors. And the source of the problem remained a “mystery.”  Maybe it just lost its marbles for a while? A false fail if you will. Just sayin’.
Back to HDDs. False fail is especially frustrating, because half the HDDs actually didn’t need to be replaced. All the operational costs were paid for no reason. The disk just needed a power cycle reset. (OK, that introduces all sorts of complex management by the RAID controller or application to manage that 10 second power reset cycle and the application traffic created in that time – but we can handle that.)
Daily, this datacenter has to:
  • Physically replace 100 disk drives
    • Individually destroy or recycle the 100 failed drives
    • Replicate or rebuild 200-300 TBytes of data – just think about that
    • Rebalance the application load on at least 100 servers – more likely 100 clusters of servers – maybe 20,000 servers?
    • Handle the network traffic  load of ~200 TBytes of replicated data
      • That’s on the order of 50 hours of 10GBit Ethernet traffic…
And 1/2 of that is for no reason at all.
First – why not rebuild the disk if it’s RAIDed? Usually MDCs use clustered applications. A traditional RAID rebuild drives the server performance to ~50%, and for a 2TByte drive, under heavy application load (the definition of a MDC) can truly take up to a week. 50% performance for a week? In a cluster that means the overall cluster is running at ~50% performance. Say 200 nodes in a cluster – that means you just lost ~100 nodes of work – or 50% of cluster performance. It’s much simpler to just take the node with the failed drive offline, get 99.5% cluster performance, and operationally redistribute the workload across multiple nodes (because you have replicated data elsewhere). But after repair, the node will have to be re-synced or re-imaged. There are ways to fix all this. We’ll talk about them on another day. Or you can simply run direct mapped storage and unmount the failed drive.
Next – Why replicate data over the network, and why is that a big deal? For geographic redundancy (say a natural disaster at one facility) and regional locality, MDCs need multiple data copies. Often 3 copies so they can do double duty as high-availability copies, or in the case of some erasure coding, 2.2 to 2.5 copies (yea – weird math – how do you have 0.5 copy…). When you lose one copy, you are down to 2, possibly 1. You need to get back to a reliable number again. Fast. Customers are loyal because of your perfect data retention. So you need to replicate that data and re-distribute it across the datacenter on multiple servers. That’s network traffic, and possibly congestion, which affects other aspects of the operations of the datacenter. In this datacenter it’s about 50 hours of 10G Ethernet traffic every day.
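If you want to sanity-check that figure, the arithmetic is simple. Here is a rough sketch in Python using the round numbers from the list above (100 replaced drives a day, 2TB each, one 10Gbit link); it is purely illustrative, not this datacenter's actual accounting:

```python
# Back-of-envelope: how long does one day's replication traffic occupy a 10GbE link?
drives_replaced_per_day = 100
drive_size_tb = 2.0                            # 2TB drives; 3TB drives push this higher
replicated_tb = drives_replaced_per_day * drive_size_tb   # ~200 TB to re-replicate

link_gbps = 10                                 # a single 10Gbit Ethernet link
seconds = replicated_tb * 1e12 * 8 / (link_gbps * 1e9)
print(f"~{replicated_tb:.0f} TB/day, about {seconds / 3600:.0f} hours of saturated 10GbE")
```

With 2TB drives that works out to roughly 44 hours, and with a mix of 2TB and 3TB drives it lands right around the 50 hours quoted above.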
To be fair, there is a new standard in SAS interfaces that will facilitate resetting a disk in-situ. And there is the start of discussion of the same around SATA – but that’s more problematic. Whatever the case, it will be years before the ecosystem is in place to handle the problems this way.
What’s that mean to you?
Well. You can expect something like 1/100 of your drives to really fail this year. And you can expect another 1/100 of your drives to fail this year, but not actually be failed. You’ll still pay all the operational overhead of not actually having a failed drive – rebuilds, disk replacements, management interventions, scheduled downtime/maintenance time, and the OEM replacement price for that drive – what, $600 or so?… Depending on your size, that’s either a don’t care, or a big deal. There are ways to handle this, and they’re not expensive – much less than the disk carrier you already pay for to allow you to replace that drive – and it can be handled transparently – just a log entry without seeing any performance hiccups. You just need to convince your OEM to carry the solution.
Rob Ober drives LSI into new technologies, businesses and products as an LSI fellow in Corporate Strategy. Prior to joining LSI, he was a fellow in the Office of the CTO at AMD, responsible for mobile platforms, embedded platforms and wireless strategy. He was a founding board member of OLPC ($100 laptop.org) and OpenSPARC.

Wednesday 20 March 2013


Upwards of 40 million users of EA's Origin game platform could be open to a vulnerability that allows an attacker to remotely execute malicious code. Demonstrated by ReVuln on Friday at the Black Hat security conference in Amsterdam, the process requires Origin's client to be installed on the victim's machine and it can be exploited when the user clicks a specially crafted link.
The issue stems from Origin's use of specific uniform resource identifiers (URIs) to communicate with games. When it launches a title, it sends an "origin://LaunchGame/" URI that may also contain custom command line arguments known as "CommandParams." In ReVuln's demo for instance, the platform uses "origin://LaunchGame/71503" to open Crysis 3.
Because that link can contain CommandParams, an attacker could deliver a payload targeting software on your system with a couple of simple commands. For example, ReVuln says this would invoke the Nvidia benchmark framework and then download a tainted DLL: origin://LaunchGame//?CommandParams= -openautomate \openautomate.dll.
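To make the mechanics a bit more concrete, here is a small Python sketch showing how a CommandParams payload rides along inside such a link. The URI and DLL name below are made up for illustration and this is not ReVuln's actual proof-of-concept; it only demonstrates how much an attacker can pack into a single clickable link:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical link shaped like the ones described above (illustrative only).
link = "origin://LaunchGame/71503?CommandParams=-openautomate%20payload.dll"

parts = urlparse(link)
params = parse_qs(parts.query)

print(parts.scheme)             # 'origin'     -- dispatched to the registered Origin handler
print(parts.netloc)             # 'LaunchGame' -- the Origin command
print(parts.path.lstrip("/"))   # '71503'      -- the game ID Origin is asked to launch
print(params["CommandParams"])  # ['-openautomate payload.dll'] -- extra arguments passed along
```

Everything after the "?" is attacker-controlled, which is why prompting before the browser hands origin:// links to the client matters.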
What's more, as we understand it, Origin doesn't even have to be running -- again, just installed -- and it's possible that an attacker could exploit a system transparently, especially if the person has their system configured to handle origin:// links automatically. As such, at a minimum, folks are encouraged to make sure their browser is set to issue a prompt when handling those links.
If you're looking to clamp down a bit more than that, the researchers recommend that you disable the origin:// URI globally with a tool such as Nirsoft's URLProtocolView. This will prevent you -- and anyone else -- from running games via shortcuts with custom parameters on your system, but ReVuln says you'll still be able to play games by running them directly from Origin's client.
It's worth noting that this isn't a new problem. The same security group exposed a similar issue on Steam last year: maliciously crafted "steam://" links could be used for remote code execution. Valve plugged that hole roughly two days after ReVuln's report was released. It's unclear if or when EA will issue a fix, not least considering it's had five months to act since the Steam issue.
SimCity Performance, Benchmarked


Going down memory lane, I can remember two computer games being responsible for getting me so interested in PCs. The original Command & Conquer was the first, around 1995. Running on the venerable MS-DOS, I spent quite a bit of time playing that game at the ripe old age of 9 on our pokey HP-powered i486.
Shortly after that I discovered SimCity 2000. The first SimCity title, which was released back in 1989, was before my time so I never played or laid eyes on the original. At the time SimCity 2000 was incredible, it was extremely detailed and offered what seemed like endless hours of gameplay. Some five years later SimCity 3000 was released (1999) and again much of my childhood was spent playing it.
For reasons that I cannot recall I never got into SimCity 4 (2003). I know I played it but for some reason it just didn’t speak to me like the previous two titles. Then along came SimCity Societies and at that point I thought my days of enjoying the SimCity games were over and for the better part of a decade they were.
But when Maxis announced last year that a sixth installment in the SimCity franchise was coming, the hair on the back of my neck stood on end. From the announcement, it looked to be a dramatic overhaul from previous titles, featuring full 3D graphics, online multiplayer gameplay and a new engine, as well as several new features and gameplay changes.
After a year of waiting, like so many others I pre-ordered the game and sat waiting for it to become available for download. Unfortunately, like everyone else, once the game became available and I finally managed to download it, I wasn’t actually able to play.
As you've probably heard for the past couple of weeks, the game requires an internet connection to play, meaning there is no offline mode. That in itself is extremely annoying but it’s much worse when the servers you are meant to play on cannot cope with demand and shut you out.
It took me several days of trying to get in, as it did for thousands of outraged fans. Since we planned to test SimCity, I really needed to get in and work out how we were going to test the game. Thankfully, by Sunday things had improved and for the next three days I set about building our test environment.

Testing Methodology

Normally when we benchmark a first person shooter, finding a good portion of the game to test with is simply a matter of playing through the game until we find a section that is rather demanding. This generally requires an hour or two of gameplay and then we get to test in full. It’s a similar process when we test real-time strategy games such as StarCraft II, for example. In that instance we chose to play a 4v4 game, record the replay and use that for benchmarking.
But with SimCity things were considerably more complex and time consuming. Because the game's progress is stored on EA servers it’s not possible to just download and use someone else’s saved game of a massive city. While it is possible to load up the leaderboard within SimCity, see who has the biggest city, and check it out, we couldn't use that for testing either since it's a live city being played, thus forever changing and hardly a controlled-enough test environment.
There are a few pre-built cities, such as the one used in the “Summer Shoals” tutorial, but with a population of less than 4,000 it doesn’t exactly make for the most demanding test environment. Therefore we created a city with a population of half a million sims, with three more cities just like it on the map.
When testing StarCraft II some readers were upset that we tested using a large 8-player map, claiming that they only play 1v1 and therefore get better performance. That is fine, but we wanted to show what it took to play the game in its most demanding state so that you'd never run into performance issues.
Getting back to SimCity, it’s a slightly different situation as all the regions are the same size. Some maps have more regions than others, but they are all 2x2 kilometers (comparable to SimCity 4's medium size). For testing we loaded one of our custom created cities (the same one each time) and increased the game speed to maximum, as this is how I always play anyway. Once that was done, we started a 60 second test using Fraps and in that time zoomed in and out multiple times while scrolling around the city.
As usual we tested at three different resolutions: 1680x1050, 1920x1200 and 2560x1600. The game was tested using two quality configurations, which we are calling maximum and medium. Normally we would test three different quality settings, but there was virtually no difference between 'max' and 'high' so we scrapped the latter.
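For those wondering how the raw Fraps output becomes the average and minimum frame rates in our charts, the sketch below shows the basic reduction. It assumes a simple per-second fps log with one reading per line, roughly what Fraps' fps CSV provides, and the filename is just a placeholder rather than an actual file from this test:

```python
# Minimal sketch: reduce a per-second fps log to the average and minimum
# frame rates quoted in the charts. "simcity_fps.csv" is a placeholder name.
def summarize(path):
    readings = []
    with open(path) as f:
        for line in f:
            try:
                readings.append(float(line))
            except ValueError:      # skip the header or any blank lines
                continue
    return sum(readings) / len(readings), min(readings)

average, minimum = summarize("simcity_fps.csv")
print(f"Average: {average:.1f} fps, Minimum: {minimum:.0f} fps")
```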
  • HIS Radeon HD 7970 GHz (3072MB)
  • HIS Radeon HD 7970 (3072MB)
  • HIS Radeon HD 7950 Boost (3072MB)
  • HIS Radeon HD 7950 (3072MB)
  • HIS Radeon HD 7870 (2048MB)
  • HIS Radeon HD 7850 (2048MB)
  • HIS Radeon HD 7770 (1024MB)
  • HIS Radeon HD 7750 (1024MB)
  • HIS Radeon HD 6970 (2048MB)
  • HIS Radeon HD 6870 (1024MB)
  • HIS Radeon HD 6850 (1024MB)
  • HIS Radeon HD 6790 (1024MB)
  • HIS Radeon HD 6770 (1024MB)
  • HIS Radeon HD 6750 (1024MB)
  • HIS Radeon HD 5870 (2048MB)
  • Gigabyte GeForce GTX Titan (6144MB)
  • Gigabyte GeForce GTX 680 (2048MB)
  • Gigabyte GeForce GTX 670 (2048MB)
  • Gainward GeForce GTX 660 Ti (2048MB)
  • Gigabyte GeForce GTX 660 (2048MB)
  • Gigabyte GeForce GTX 650 Ti (2048MB)
  • Gigabyte GeForce GTX 580 (1536MB)
  • Gigabyte GeForce GTX 560 Ti (1024MB)
  • Gigabyte GeForce GTX 560 (1024MB)
  • Gigabyte GeForce GTX 550 Ti (1024MB)
  • Gigabyte GeForce GTX 480 (1536MB)
  • Gigabyte GeForce GTX 460 (1024MB)
  • Intel Core i7-3960X Extreme Edition (3.30GHz)
  • 4 x 4GB G.Skill DDR3-1600 (CAS 8-8-8-20)
  • Gigabyte G1.Assassin2 (Intel X79)
  • OCZ ZX Series 1250w
  • Crucial m4 512GB (SATA 6Gb/s)
  • Microsoft Windows 7 SP1 64-bit
  • Nvidia Forceware 314.14

If you’re considering upgrading to a Haswell CPU or building an entirely new system built around the chip but have been holding out to see what performance is like compared to existing processors, today is your lucky day. That’s because the first round of benchmarks from Haswell have hit the web courtesy of Tom’s Hardware.
The publication was able to get their hands on a Core i7-4770K which will replace the i7-3770K at the top of the chip maker’s food chain (excluding Sandy Bridge-E). The chip retains the same base / Turbo clock speeds, core count (4/8) and 8MB of L3 cache as the Ivy Bridge counterpart. The only exception is the GPU clock which has been bumped up by 100MHz.
Despite the fact that the publication’s test platform was running with 17 percent less memory bandwidth, Haswell was generally able to outpace similar Ivy Bridge chips by 7 to 13 percent. These are pretty respectable gains considering clock speeds haven’t increased. In other tests, like Sandra’s Multimedia benchmark, integer performance was nearly double what Ivy Bridge was capable of.
Onboard graphics also showed an improvement over Intel’s previous best: the site recorded frame rates that were on average 12-52 percent higher depending on the resolution and the game. Unfortunately, the site stopped short of testing overclocking or recording power consumption.
Intel’s next generation processor is still a few months out but it’s nice to get an idea of what sort of performance Haswell will carry with it when it does arrive. We fully expect final production silicon to perform even better than what Tom’s Hardware recorded with this pre-production sample.

Wednesday 13 March 2013


Although this year's Tomb Raider reboot made our latest list of most anticipated PC games, I must admit that it was one of the games I was least looking forward to from a performance perspective. Previous titles in the franchise have received mixed to positive reviews, but gameplay aside, their visuals weren't exactly mind-blowing so we've never bothered doing a performance review on one -- until now, anyway.
As with the last few entries, Crystal Dynamics developed the new Tomb Raider using the Crystal Engine -- albeit a heavily modified version. Being a multiplatform release, we were naturally worried about the game being geared toward consoles with the PC being an afterthought, which has become increasingly common (Dead Space 3 comes to mind as a recent example) and generally results in lackluster graphics.
Those concerns were at least partially alleviated when we learned that the PC port was being worked on by Nixxes Software BV, the same folks who handled the PC versions of Hitman: Absolution and Deus Ex: Human Revolution, both of which were great examples of what we expect from decent ports in terms of graphical quality and customization. Hitman in particular really stressed our higher-end hardware.
We were also relieved to learn that Tomb Raider supports DirectX 11, which brings access to rendering technologies such as depth of field, high definition ambient occlusion, hardware tessellation, super-sample anti-aliasing and contact-hardening shadows. Additionally, compared to the diluted console versions, the PC build offers better textures as well as AMD's TressFX real-time hair physics system.
The result should be a spectacular looking game that pushes the limits of today's enthusiast hardware -- key word being "should," of course -- so let's move on and see what the Tomb Raider reboot is made of.

Testing Methodology

We'll be testing 27 DirectX 11 graphics card configurations from AMD and Nvidia covering a wide range of prices from the affordable to the ultra-expensive. The latest drivers will be used, and every card will be paired with an Intel Core i7-3960X to remove CPU bottlenecks that could influence high-end GPU scores.
We're using Fraps to measure frame rates during 90 seconds of gameplay footage from Tomb Raider’s first level, at the checkpoint called "Stun." The test begins with Lara running to escape from a cave system.
Our Fraps test ends just before Lara exits the cave, which is ironically where the built-in benchmark begins. We decided to test a custom section of the game rather than the stock benchmark because this is how we will test Tomb Raider in the future when reviewing new graphics cards. Using Fraps also allows us to record frame latency performance, though for this particular article we didn't include those.
Frame timings weren't included for two reasons: it's not easy to display all that data when testing 27 different GPUs, and we feel Nvidia needs more time to improve their drivers. We'll include frame time performance for Tomb Raider in our next GPU review.
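For context, here is roughly what that frame-time reduction looks like. The sketch below assumes Fraps' frametimes CSV, which logs a cumulative millisecond timestamp for every frame rendered; the filename and the reduction itself are illustrative rather than our exact pipeline:

```python
# Illustrative sketch: turn a Fraps frametimes CSV (cumulative ms timestamps,
# one row per frame) into average fps and a 99th-percentile frame time.
# "tombraider_frametimes.csv" is a placeholder filename.
import csv

def frame_stats(path):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    stamps = [float(r[1]) for r in rows[1:] if len(r) > 1]   # skip the header row
    frame_ms = sorted(b - a for a, b in zip(stamps, stamps[1:]))
    avg_fps = 1000 * len(frame_ms) / (stamps[-1] - stamps[0])
    p99 = frame_ms[int(len(frame_ms) * 0.99)]                # 99th-percentile frame time
    return avg_fps, p99

avg_fps, p99 = frame_stats("tombraider_frametimes.csv")
print(f"{avg_fps:.1f} fps average, 99th percentile frame time {p99:.1f} ms")
```

High-percentile frame times are what expose stuttering that a plain fps average hides, which is exactly why the frame latency data matters.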
We'll test Tomb Raider at three common desktop display resolutions: 1680x1050, 1920x1200 and 2560x1600 using DX11. We are also testing with the three top quality presets: Ultimate, Ultra and High. No changes will be made to the presets.
  • HIS Radeon HD 7970 GHz (3072MB)
  • HIS Radeon HD 7970 (3072MB)
  • HIS Radeon HD 7950 Boost (3072MB)
  • HIS Radeon HD 7950 (3072MB)
  • HIS Radeon HD 7870 (2048MB)
  • HIS Radeon HD 7850 (2048MB)
  • HIS Radeon HD 7770 (1024MB)
  • HIS Radeon HD 7750 (1024MB)
  • HIS Radeon HD 6970 (2048MB)
  • HIS Radeon HD 6870 (1024MB)
  • HIS Radeon HD 6850 (1024MB)
  • HIS Radeon HD 6790 (1024MB)
  • HIS Radeon HD 6770 (1024MB)
  • HIS Radeon HD 6750 (1024MB)
  • HIS Radeon HD 5870 (2048MB)
  • Gigabyte GeForce GTX Titan (6144MB)
  • Gigabyte GeForce GTX 680 (2048MB)
  • Gigabyte GeForce GTX 670 (2048MB)
  • Gainward GeForce GTX 660 Ti (2048MB)
  • Gigabyte GeForce GTX 660 (2048MB)
  • Gigabyte GeForce GTX 650 Ti (2048MB)
  • Gigabyte GeForce GTX 580 (1536MB)
  • Gigabyte GeForce GTX 560 Ti (1024MB)
  • Gigabyte GeForce GTX 560 (1024MB)
  • Gigabyte GeForce GTX 550 Ti (1024MB)
  • Gigabyte GeForce GTX 480 (1536MB)
  • Gigabyte GeForce GTX 460 (1024MB)
  • Intel Core i7-3960X Extreme Edition (3.30GHz)
  • 4 x 4GB G.Skill DDR3-1600 (CAS 8-8-8-20)
  • Gigabyte G1.Assassin2 (Intel X79)
  • OCZ ZX Series 1250w
  • Crucial m4 512GB (SATA 6Gb/s)
  • Microsoft Windows 7 SP1 64-bit
  • Nvidia Forceware 314.14
  • AMD Catalyst 13.2 (Beta 7)


The dominance of iOS will be challenged later this year as Android hurtles toward becoming the most-used mobile operating system on tablets, according to IDC. The researcher has updated its forecast to reflect the surging interest in slates such as the Nexus 7 that are smaller and cheaper than Apple's iPads, with the latest data projecting shipments of 190.9 million units in 2013, up from the previously expected 172.4 million.
The group notes that half of the tablets shipped this quarter had screens smaller than eight inches. IDC analyst Jitesh Ubrani believes this suggests that consumers are realizing smaller slates are often better suited for typical usage than the larger options. Vendors are reportedly racing to meet that demand and most of them will offer Android devices, which should allow Google's platform to overtake iOS before the year is up.
Android tablets captured a healthy slice of the market in 2012 and their share is due to reach 48.8% in 2013, up from 41.5% in IDC's previous forecast. When that happens, iOS will purportedly slip from 51% of the market in 2012 to 46% in 2013. And while Android is chipping away at iOS, IDC says Windows will be nipping at both their ankles as it's expected to grow from 1% of the market last year to 7.4% in 2017. The researcher doesn't have high hopes for Windows RT, however, as its growth is estimated to remain below 3% over the next five years.
"Microsoft's decision to push two different tablet operating systems, Windows 8 and Windows RT, has yielded poor results in the market so far," said analyst Tom Mainelli. "Consumers aren't buying Windows RT's value proposition, and long term we think Microsoft and its partners would be better served by focusing their attention on improving Windows 8. Such a focus could drive better share growth in the tablet category down the road."

Thursday 7 March 2013


The video game industry is just that. An industry.
Which means that it exists in a capitalistic world. You know, a free market. A place where you’re welcome to spend your money on whatever you please… or to refrain from spending that money.
Those companies that put these products out? They’re for-profit businesses. They exist to produce, market, and ship great games: first for money, then for acclaim.
And when those companies are publicly traded on the stock market they’re forced to answer to their shareholders. This means that they need to make a lot of money in order to increase the value of the shareholder’s stock. Every quarter.
Adjusted for inflation, your average video game is actually cheaper than it ever has been. Never mind the ratio of the hours of joy you get from a game per dollar compared to film.
To produce a high quality game it takes tens of millions of dollars, and when you add in marketing that can get up to 100+ million. In the AAA console market you need to spend a ton of cash on television ads alone, never mind other marketing stunts, launch events, swag, and the hip marketing agency that costs a boatload in your attempts to “go viral” with something. Not only is the market more crowded than ever but your average consumer has many more entertainment options than ever before in the history of humanity. (Hell, when levels are loading in our games my wife and I read Twitter and Reddit.)
Another factor to consider is the fact that many game development studios are in places like the San Francisco bay area, where the cost of living is extraordinarily high. (Even Seattle is pretty pricey these days.) Those talented artists, programmers, designers, and producers that spent their time building the game you love? They need to eat and feed their families. (Something that the hipster/boomerang kid generation seems to forget all too often.)
I’ve seen a lot of comments online about microtransactions. They’re a dirty word lately, it seems. Gamers are upset that publishers/developers are “nickel and diming them.” They’re raging at “big and evil corporations who are clueless and trying to steal their money.”
I’m going to come right out and say it. I’m tired of EA being seen as “the bad guy.” I think it’s bullshit that EA has the “scumbag EA” memes on Reddit and that Good Guy Valve can Do No Wrong.
Don’t get me wrong – I’m a huge fan of Gabe and co. and most everything they do. (Remember, I bought that custom portal turret that took over the internet a while back and I have friends over there.) However, it blows my mind that somehow gamers don’t seem to get that Valve is a business, just like any other, and when Valve charges $100 for an engagement ring in Team Fortress 2 it’s somehow “cool” yet when EA wants to sell something similar it’s seen as “evil.” Yes, guys, I hate to break it to you, as awesome as Valve is they’re also a company that seeks to make as much money as possible.
They’re just way better at their image control.
Making money and running a business is not inherently evil. It creates jobs and growth and puts food on the table. This country was built on entrepreneurship. Yes, there are obvious issues around basic business ethics (Google “Pinto Fires”) and the need for a company to give back to its community, but that’s not what this blog is about right now.
People love to beat up on Origin, but they forget that, for a good amount of time, Steam sucked. No one took it seriously for the first while. When Gabe pitched it at GDC to my former co-workers years ago they came back with eye rolls. (Who’s laughing now? All of Valve.)  It took Valve years to bang their service into the stellar shape that it is in these days. Yet somehow everyone online forgets this, and they give EA crap about trying to create their own online services. Heaven forbid they see our digital roadmap for the future and try to get on board the “games as services” movement.
I remember when the rage was pointed at Epic when we allowed users to purchase weapon skins in Gears 3. I replied to an enraged fan on Twitter that “You’re more than welcome to not buy the optional cosmetic weapon skins that will make you more visible to the enemy.” And you know what? In spite of the uproar, people still bought plenty of them. (I’ve seen the numbers.)
If you don’t like EA, don’t buy their games. If you don’t like their micro-transactions, don’t spend money on them. It’s that simple. EA has many smart people working for them (Hi, Frank, JR, and Patrick!) and they wouldn’t attempt these things if they didn’t work. Turns out, they do. I assure you there are teams of analysts over there studying the numbers behind consumer behavior and how you, the gamer, spend your hard-earned cash.
If you’re currently raging about this on GAF, or on the IGN forums, or on Gamespot, guess what? You’re the vocal minority. Your average guy that buys just Madden and GTA every year doesn’t know, nor does he care. He has no problem throwing a few bucks more at a game because, hey, why not?
The market, as I have previously stated, is in such a state of turmoil that the old business model is either evolving, growing, or dying. No one really knows. “Free to play” aka “Free to spend 4 grand on it” is here to stay, like it or not. Everyone gets a Smurfberry! Every single developer out there is trying to solve the mystery of this new model. Every console game MUST have a steady stream of DLC because, otherwise, guess what? It becomes traded in, or it’s just rented. In the console space you need to do anything to make sure that that disc stays in the tray. I used to be offended by Gamestop’s business practices but let’s be honest… they’re the next Tower Records or Sam Goody. It’s only a matter of time.
Remember, if everyone bought their games used there would be no more games. I don’t mean to knock you if you’re cash strapped – hell, when I was a kid and I had my paper route I would have bought the hell out of used games. But understand that when faced with this issue those that fund and produce those games you love have to come up with all sorts of creative ways for the business to remain viable and yes, profitable.
Saying a game has micro-transactions is a giant generalization; really, it is an open-ended comment. What can you buy? Can you buy a cosmetic hat? Or can I spend a buck to go to the top of the leaderboard? Can I buy a bigger gun? What about gambling? (It’s like saying a game is open world; that could mean GTA, Assassin’s Creed, or heck, even Borderlands.) Which one do you actually mean? Do Zynga’s practices often feel sleazy? Sure. Don’t like it? Don’t play it. Don’t like pay to win? You have the freedom to opt out and not even touch the product.
If you truly love a product, you’ll throw money at it.
No one seemed too upset at Blizzard when you could buy a pet in World of Warcraft – a game that you had to buy that was charging a monthly fee. (How dare console games have steady cycles of buyable DLC!) When I was a child and the Ultimate Nintendo Fanboy I spent every dime I earned from my paper route on anything Nintendo. Nintendo Cereal. Action figures. Posters. Nintendo Power. Why? Because I loved what Nintendo meant to me and I wanted them to keep bringing me more of this magic.
People like to act like we should go back to “the good ol’ days” before micro-transactions but they forget that arcades were the original change munchers. Those games were designed to make you lose so that you had to keep spending money on them. Ask any of the old Midway vets about their design techniques. The second to last boss in Mortal Kombat 2 was harder than the last boss, because when you see the last boss that’s sometimes enough for a gamer. The Pleasure Dome didn’t really exist in the original Total Carnage. Donkey Kong was hard as hell on purpose. (“Kill screen coming up!”)
I’ve been transparent with most folks I’ve worked with in my career as to why I got into this business. First, to make amazing products – because I love the medium more than any. Second, to be visible. I enjoy the notoriety that I’ve managed to stir up. And finally, yes, to make money. Money doesn’t buy happiness, but it sure is a nice lubricant when you can take that trip you’ve always wanted or feed your family or pay your bills on time.
And that brings me full circle to my main point. If you don’t like the games, or the sales techniques, don’t spend your money on them.

Gigabyte GeForce GTX Titan Review

Nvidia's Kepler architecture debuted a year ago with the GeForce GTX 680, which has sat somewhat comfortably as the market's top single-GPU graphics card, forcing AMD to reduce prices and launch a special HD 7970 GHz Edition card to help close the value gap. Despite besting its rival, many believe Nvidia had planned to make its 600 series flagship even faster by using the GK110 chip, but purposefully held back with the GK104 to save cash, since it was competitive enough performance-wise.
That isn't to say people were necessarily disappointed in the GTX 680. The 28nm part packs 3.54 billion transistors into a smallish 294mm2 die and delivers 18.74 Gigaflops per watt with a memory bandwidth of 192.2GB/s, while it tripled the GTX 580's CUDA cores and doubled its TAUs -- no small feat, to be sure. Nonetheless, we all knew the GK110 existed and we were eager to see how Nvidia brought it to the consumer market -- assuming it even decided to. Fortunately, that wait is now over.
After wearing the single-GPU performance crown for 12 months, the GTX 680 has been dethroned by the new GTX Titan. Announced on February 21, the Titan carries a GK110 GPU with a transistor count that has more than doubled from the GTX 680's 3.5 billion to a staggering 7.1 billion. The part is expected to deliver roughly 25% to 50% more performance than Nvidia's previous flagship, and it has considerably more resources at its disposal, including 2688 stream processors (up 75%), 224 texture units (also up 75%) and 48 raster operations (a healthy 50% boost).
In case you're curious, it's worth noting that there's "only" estimated to be a 25% to 50% performance gain because the Titan is clocked lower than the GTX 680. Given those expectations, it would be fair to assume that the Titan would be priced at roughly a 50% premium, which would be about $700. But there's nothing fair about the Titan's pricing -- and there doesn't have to be. Nvidia is marketing the card as a hyper-fast solution for extreme gamers with deep pockets, setting the MSRP at a whopping $1,000.
That puts the Titan in the dual-GPU GTX 690's territory, or about 120% more than the GTX 680. In other words, the Titan is not going to be a good value in terms of price versus performance, but Nvidia is undoubtedly aware of this and to some extent, we'll have to respect it as a niche luxury product. With that in mind, let's lift the Titan's hood and see what makes it tick before we run it through our usual gauntlet of benchmarks, which now includes frame latency measurements -- more on that in a bit.

Titan’s GK110 GPU in Detail

The GeForce Titan is a true processing powerhouse. The GK110 chip carries 14 SMX units with 2688 CUDA cores, boasting up to 4.5 Teraflops of peak compute performance.
As noted earlier, the Titan features a core configuration that consists of 2688 SPUs, 224 TAUs and 48 ROPs. The card's memory subsystem consists of six 64-bit memory controllers (384-bit) with 6GB of GDDR5 memory running at 6008MHz, which works out to a peak bandwidth of 288.4GB/s -- 50% more than the GTX 680.
The Titan we have is outfitted with Samsung K4G20325FD-FC03 GDDR5 memory chips, which are rated at 1500MHz -- the same as you'll find on the reference GTX 690.
Where the Titan falls short of the GTX 680 is in its core clock speed, which is set at 836MHz versus 1006MHz. That 17% difference is made up slightly by Boost Clock, Nvidia's dynamic frequency feature, which can push the Titan as high as 876MHz.
By default, the GTX Titan includes a pair of dual-link DVI ports, a single HDMI port and one DisplayPort 1.2 connector. Support for 4K resolution monitors exists, and it is also possible to drive up to four displays at once.

Saturday 2 March 2013

Windows 8 continued its slow but steady growth in February grabbing 2.26 percent of the operating system market share, up from 1.72 percent in December and 1.09 percent in November according to Net Applications. During the month Windows 7 also gained 0.07 percentage points after losing 0.63 percentage points in January, and it’s still the most used platform by a comfortable margin with 44.55 percent of the market.
The venerable Windows XP is second with 38.99 percent, down from 39.51 percent, while Vista continued to shed users with a 5.24 percent share. That put Windows 8 in fourth place among all operating system versions, just ahead of Mac OS X 10.8, which gained 0.17 points to 2.61 percent market share.
So how has Windows 8 fared compared to Windows 7 during its initial launch months? To put it into perspective, both versions of Windows were officially released in October of their respective years, but by the end of February 2010 Windows 7 had already seized more than 9 percent of the traffic seen by Net Applications.
There are a few considerations to take into account, such as the fact that Windows 7 superseded and improved upon an operating system release that was generally seen as a commercial failure, whereas Windows 8 marks a significant paradigm shift for Microsoft that’s bound to encounter some resistance from long time Windows users despite offering heavily discounted upgrades -- which are no longer available, by the way.
Slowing PC sales might also be playing a part in Windows 8’s sluggish adoption.
Overall market share numbers for each operating system in aggregate haven’t changed much: Windows still dominates with a whopping 91.62 percent, down 0.09 points from 91.71 percent, followed by OS X which gained just as much to grab 7.17 percent and Linux holding steady at 1.21 percent.


    Blogger news

    This blog was recently another version of Tech Times. This is its new blog, and it continues alongside the old one. Each blog carries the same posts.

    About

    This blog is the idea of Malayil Vivekanandan. He wanted to serve people with the latest technology news and updates, so that readers stay more up to date with their tech knowledge.