OK, since nobody is bothering to read the article I'm referring to, I'll paste an excerpt here.
Quote from an article by "The Inquirer":
When Dell, HP and others announce a BIOS 'fix', the reason it is so humorous is that all they are doing is lowering the amount of thermal stress on the chips when the fan would not normally be on. When the fan is going full tilt without the 'fix', the new 'updated thermal profiles' won't make a difference. When the fans are normally off or on low, the profiles will essentially lessen the stress from a four to a three. It is just there to allow the laptop to live through the warranty period so the companies don't have to pay for the fix. After that, if the defective chips burn out, it isn't their problem. The 'fix' doesn't fix anything at all.
In the end, it comes down to Nvidia screwing up badly on package engineering and testing, then trying as best they can to bury the problem while passing the buck. It appears that every Nvidia 65nm and 55nm part with high lead bumps and/or low Tg underfill are defective, it is just a question of how defective they are, and when they will die.
As far as we are able to tell, contrary to Nvidia's vague statements blaming suppliers, there are no materials defects at work here. Every material they used lived up to the claimed specs, and every material they used would have done the job while kept within the advertised parameters. Nvidia's engineering failures put undue stress on the parts, and several failures compounded to make two generations of defective parts. The suppliers and subcontractors did exactly what they were told, Nvidia just told them to do the wrong thing.
When it started talking about this, Nvidia failed crisis management 101, and the coverup shows it doesn't care about consumers, just its bottom line. NV is doing exactly the wrong thing for the wrong reasons, and the lawyers circling with class action paperwork in hand are going to eat them alive.
The last time you had such a huge batch of defective GPUs, the company that did it swore up and down – just like Nvidia – that there was no problem despite forums filled with evidence to the contrary.
A few weeks later, they turned around and admitted there was a problem, and took a $1.1 billion charge, placating customers and fending off lawsuits. End of quote.
All of the thoughts I have expressed are based on the articles at:
hxxp://www.theinquirer.net/gb/inquirer/ ... -defective If you read the whole thing, the thought suggests itself that the temperatures read out in software could be faked in order to protect the manufacturer and the OEMs.
Against this background, I am looking for a method to measure the GPU's actual temperature values.
Perhaps it is now clearer what I am getting at.
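One way to at least cross-check the vendor tool's number in software is to read the kernel's raw sensor files directly. The sketch below (my own suggestion, not from the article) walks the Linux hwmon sysfs interface, where temperature inputs are reported in millidegrees Celsius. Note the limitation: if the value is already falsified at the firmware or sensor level, every software readout, including this one, sees the same faked number; only an external probe (e.g. a thermocouple on the package) is truly independent.

```python
# Read temperatures directly from the Linux hwmon sysfs interface,
# bypassing vendor utilities (but NOT the firmware/sensor itself).
import glob
import os

def millideg_to_celsius(raw: str) -> float:
    """hwmon temp*_input files report millidegrees Celsius as plain text."""
    return int(raw.strip()) / 1000.0

def read_hwmon_temps() -> dict:
    """Return {(chip_name, sensor_file): degrees_C} for all hwmon temp inputs."""
    temps = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        hwmon_dir = os.path.dirname(path)
        try:
            with open(os.path.join(hwmon_dir, "name")) as f:
                chip = f.read().strip()
            with open(path) as f:
                temps[(chip, os.path.basename(path))] = millideg_to_celsius(f.read())
        except OSError:
            continue  # a sensor can be transiently unreadable; skip it
    return temps

if __name__ == "__main__":
    for (chip, sensor), deg in sorted(read_hwmon_temps().items()):
        print(f"{chip}/{sensor}: {deg:.1f} C")
```

Comparing these raw values against what the vendor's own tool reports would at least reveal whether the driver layer is massaging the numbers, even though it cannot rule out manipulation below that level.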
Regards, Knud
PS: Since my English is not particularly good, it is possible that I have misunderstood some of the connections; if so, please bear with me.