It looks like AMD has outrun NVIDIA today. Its World’s First Microsoft DirectX® 11 Graphics Processor, presented a few hours ago in Taipei, is currently the best hardware for Windows 7. Catch up, NVIDIA! However, not many details have been revealed so far. At least enjoy the graphics:
Posts Tagged ‘Nvidia’
The summer has begun, and as usual at this time of year, big companies present the results of their hard work to the public. With Microsoft’s Bing and Google Wave flooding the news, you might have overlooked the joint release by NVIDIA and Supermicro. At Computex 2009 in Taipei, Taiwan, NVIDIA and Supermicro announced
“a new class of server that combines massively parallel NVIDIA® Tesla™ GPUs with multi-core CPUs in a single 1U rack-mount server.”
According to the announcement, performance increases up to 12 times compared to a traditional quad-core CPU-based 1U server. The new 1U solution combines two NVIDIA Tesla C1060 GPU cards with dual quad- or dual-core Intel® Xeon® 5500 series processors, so you do not have to configure your machine as with the NVIDIA S1070, which features four Tesla GPUs. The new server is based on the NVIDIA CUDA™ architecture.
It should be a very powerful solution and an expensive one too. However, we do not expect password recovery to benefit much from it. As we’ve mentioned many times before, password recovery is barely cost-effective when expensive hardware is involved in the process.
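To illustrate the cost-effectiveness argument, here is a back-of-the-envelope sketch. All prices and speeds below are hypothetical numbers chosen for illustration, not measured figures for any real product:

```python
# Hypothetical cost-effectiveness comparison. The speeds and prices
# are illustrative assumptions, not benchmarks of real hardware.

def passwords_per_dollar(speed_pps, price_usd):
    """Brute-force throughput (passwords/sec) per dollar of hardware."""
    return speed_pps / price_usd

# Assumed numbers for illustration only:
consumer_gpu = passwords_per_dollar(speed_pps=100_000_000, price_usd=400)
tesla_server = passwords_per_dollar(speed_pps=500_000_000, price_usd=10_000)

# Even if the server is several times faster in absolute terms,
# the cheap card delivers far more throughput per dollar spent.
print(consumer_gpu > tesla_server)
```

Under these assumed figures the consumer card wins by a factor of five per dollar, which is the general reason expensive server hardware rarely pays off for password recovery.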
Hardware acceleration of password recovery has been a hot topic for quite some time already. We were the first to adopt widely available graphic cards for this purpose and we’re proud of this. Today I’d like to share some thoughts on hardware acceleration for password recovery, its past, present, and future. I will also cover the most frequently asked questions regarding GPUs.
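The reason GPUs suit password recovery so well is that every candidate password can be hashed independently of the others. Here is a minimal sketch of that idea in Python, using `multiprocessing` as a CPU stand-in for GPU threads; a real GPU cracker runs the same scheme across thousands of CUDA cores (the MD5 target and three-letter search space are toy assumptions for the example):

```python
# Minimal sketch of parallel brute-force search: each candidate is
# hashed independently, so the work splits cleanly across workers.
# multiprocessing stands in here for what GPU threads do at scale.
import hashlib
import itertools
import string
from multiprocessing import Pool

# Toy target for the example: the MD5 hash of "abc".
TARGET = hashlib.md5(b"abc").hexdigest()

def check(candidate):
    """Hash one candidate; return it if it matches the target."""
    if hashlib.md5(candidate.encode()).hexdigest() == TARGET:
        return candidate
    return None

def crack(length=3):
    """Search all lowercase candidates of the given length in parallel."""
    candidates = (
        "".join(chars)
        for chars in itertools.product(string.ascii_lowercase, repeat=length)
    )
    with Pool() as pool:
        for hit in pool.imap_unordered(check, candidates, chunksize=1000):
            if hit:
                return hit
    return None

if __name__ == "__main__":
    print(crack())
```

Because there is no shared state between candidates, the speedup scales almost linearly with the number of workers, which is exactly what makes hundreds of GPU cores so effective for this workload.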
First of all, sad news: Intel Larrabee has been delayed until 2010 (we were expecting it in Q4 2009), according to the reports. With 32 cores onboard (though this number is not confirmed yet), it looks like a very good system for password cracking. Some Larrabee development tools and resources are already available, and of course we’re porting our code to this platform and will share the results as soon as we are able to (we’re under NDA with Intel, as well as with NVIDIA and AMD :)).
We wrote about cost-effective video cards recently, but what about the better ones, if price does not really matter? Just read Best Of The Best: High-End Graphics Card Roundup at Tom’s Hardware. Large. Expensive. Power-hungry. But really fast, so they’re the best choice if you deal with GPU acceleration.
Btw, don’t forget to get a good cooler for your new card, like this one.
Tom’s Hardware has tested two mainstream NVIDIA cards (GeForce 9600 GT and GeForce 9800 GTX) on several CUDA-enabled applications. The applications were:
- CyberLink PowerDirector
- Tsunami MPEG Encoder
- Super LoiLoScope
If you are going to purchase a new computer (or build one yourself), you should definitely think about graphics, whether for CAD/CAM, gaming, searching for extraterrestrial intelligence at home, or password cracking. And think about the budget, too. I hope you’re already aware of NVIDIA SLI, which allows the use of multiple video cards, but how does a single dual-GPU card compare to two single-GPU ones? Read GeForce GTX 295 Vs. GTX 275 SLI: When Two Are Better Than One.
Finally, NVIDIA’s GT300 specifications have been revealed! 512 cores (remember that the GT200 has only 240), which means about 3 TFLOPS. Can you imagine that? We’re also expecting the new generation of Tesla supercomputers based on these GPUs. The GT300 also brings hardware support for CUDA 3.0, DirectX 11, OpenGL 3.1 and OpenCL.
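Where does the “about 3 TFLOPS” figure come from? A quick sanity check: peak throughput is roughly cores × shader clock × flops issued per core per cycle. The GT300 shader clock below is an assumption (the actual clock was not announced); 3 flops/cycle reflects the dual-issue MAD + MUL of NVIDIA shaders of this generation:

```python
# Back-of-the-envelope check of the ~3 TFLOPS figure. The GT300 clock
# is an assumed value, not an official spec.

def peak_gflops(cores, shader_clock_ghz, flops_per_cycle=3):
    """Theoretical peak: cores x clock x flops per core per cycle
    (MAD + MUL = 3 flops on this generation of NVIDIA shaders)."""
    return cores * shader_clock_ghz * flops_per_cycle

gt200 = peak_gflops(240, 1.476)  # GTX 280 shader clock: ~1063 GFLOPS
gt300 = peak_gflops(512, 2.0)    # assumed 2.0 GHz clock: ~3072 GFLOPS
print(round(gt200), round(gt300))
```

So 512 cores only reach 3 TFLOPS if the shader clock lands around 2 GHz; at GT200-era clocks the same formula gives closer to 2.3 TFLOPS.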