bradc
Jul 27, 11:19 AM
"...Core 2 Duo chips need less electricity, drawing just 65 watts compared to the Pentium 4�s 95 watts and Pentium D�s 130 watts"
Good Lord - does anybody know what the G5 is? I'd imagine that the elaborate cooling system in the current G5 towers probably won't be needed if it's running anything like the D's...
Don't ask! Hahahaha, the G5s run hot. I'd hate to know how much they're drawing, but with a 600W power supply... it's a lot ;)
propynyl
Apr 11, 12:57 PM
My 3GS contract ends in June and Apple will be pushing its luck expecting me to go half a year without being tempted to jump platforms instead of waiting for the iPhone 5.
I feel the same way. I mean, I'm NOT jumping ship, but I'm also not settling for the iPhone 4. I'm stuck waiting for the iPhone 5, hoping my 3GS doesn't fall apart like it is starting to do. This totally sucks!! I might have to get a freaking GoPhone to tide me over if my 3GS falls apart.
Doctor Q
Jul 14, 02:54 PM
... and the other one HD-DVD! :eek: ;) :D
Why all the smilies? Having the ability to install other-format optical drives is what AppleInsider is talking about.
ergle2
Sep 15, 12:50 PM
More pedantic details for those who are interested... :)
NT actually started as OS/2 3.0. Its lead architect was OS guru Dave Cutler, who is famous for architecting VMS for DEC, and naturally its design influenced NT. And the N-10 (where "NT" comes from: "N-Ten") Intel RISC processor was never intended to be a mainstream product; Dave Cutler insisted on the development team NOT using an x86 processor to make sure they would have no excuse to fall back on legacy code or thought. In fact, the N-10 build that was the default work environment for the team was never intended to leave the Microsoft campus. Over its life NT has run on x86, DEC Alpha, MIPS, PowerPC, Itanium, and x64.
IBM and Microsoft worked together on OS/2 1.0 from 1985-1989. Much maligned, it did suck because it was targeted at the 286 rather than the 386, but it did break new ground -- preemptive multitasking and an advanced GUI (Presentation Manager). By 1989 they wanted to move on to something that would take advantage of the 386's 32-bit architecture, flat memory model, and virtual machine support. Simultaneously they started OS/2 2.0 (extending the current 16-bit code to a 16/32-bit hybrid) and OS/2 3.0 (a ground-up, platform-independent version). When Windows 3.0 took off in 1990, Microsoft had second thoughts and eventually broke with IBM. OS/2 3.0 became Windows NT -- in the first days of the split, NT still had OS/2 Presentation Manager APIs for its GUI. They ripped those out and created the Win32 APIs. That's also why, to this day, NT/2K/XP support OS/2 command-line applications, and there was also a little-known GUI pack that would support OS/2 1.x GUI applications.
All very true, but beyond that -- if you've ever looked closely at VMS and NT, you'll notice it's a lot more than just "influenced". The core design was pretty much identical -- the way I/O worked, the interrupt handling, the scheduler, and so on -- they're all practically carbon copies. Some of the names changed, but how things worked under the hood hadn't. Since then it's evolved, of course, but you'd expect that.
Quite amusing, really... how a heavyweight enterprise-class OS of the 80's became the desktop of the 00's :)
Those who were around in the dim and distant past will recall that VMS and Unix were two of the main competitors in many marketplaces in the 80's and early 90's... and today we have OS X, Linux, FreeBSD, Solaris, etc. vs XP, W2K3 Server and (soon) Vista -- kind of ironic, dontcha think? :)
Of course, there's a lot still running VMS to this very day. I don't think HP wants them to tho' -- they just sent all the support to India, apparently, to a team with relatively little experience...
Žalgiris
Apr 6, 03:30 PM
But hey, haven't you heard, Honeycomb is a real tablet OS. (Whatever the heck that means.)
Google must have used that line in a PowerPoint somewhere because I see it regurgitated verbatim on every single iPad vs. Honeycomb thread.
The Google brainwashing continues. ;)
Real tablet OS, Full internet, True multitasking - the list's expanding fast :D
goobot
Mar 31, 07:18 PM
I heard that iOS 4.3 is more open than the current Android OS :p
Reach9
Apr 11, 02:35 PM
Why would you, when Android has at the moment passed Apple on every standard out there?
Android hasn't passed Apple on every standard. Please give me an example of that.
But, Android phones are better smartphones than the iPhone, imo.
matticus008
Nov 29, 06:13 AM
One wonders why it hasn't been used in a Court of Law.
Not really, though. There are countless ways of maneuvering around any such royalties, from framing it as an access toll to a deposit or anything in between. This added cost doesn't actually get you anywhere in litigation, most importantly because it in no way stipulates anything between you, the customer, and the label.
What's also interesting is that if this fee is added they have now unwittingly legitimized the stolen music.
Far from it. Each taxpayer contributes to fund their local DMV, and yet their services aren't free. The state collects a tax on car sales, which goes in most cases to road improvement, police departments, and the DMV (along with a truly bizarre array of other causes), but it's only part of the cost. You also pay taxes to a general fund, which is distributed to agencies and services you may never use (or even be aware of). Contributing some money cannot be construed as contributing sufficient money here.
You also pay for car insurance which protects you in the event of an accident; intentionally putting yourself in an accident is insurance fraud. There's no such thing as "music fraud" (at least in this construction), but the result is a sort of piracy insurance policy for the label. Naturally, though, the labels claim such exorbitant losses and damages from piracy that even $1 per iPod would hardly dent that figure.
If this went into effect, I would have a defense in court when I downloaded the entire Universal Label Catalog (All Their Music) off the net.
If only it worked that way...
Just to be clear, this whole idea of collecting on music players is nothing short of outrageous. But it doesn't have the legal implications or weight that have been popularized here. They CAN have their cake and eat it, too, and they know it. That's why it's important for me to ensure that these false notions don't become ingrained as part of the Internet groupthink--when you step back into the real world, you'll be equally screwed, with or without this fee.
RUAerospace
Aug 17, 11:28 AM
Lots of stuff on Anandtech about the poor memory performance on the Intel chipset.
Looks like the Xeons got killed by the G5 in Word in their tests.
Might be an interesting machine when/if the motherboard chipset/memory performance issue is looked into.
I think part 3 of their review will be telling, pitting the machine against XP machines in a variety of tests.
Also from the Anandtech review (the reviewer's conclusion, actually):
The Mac Pro is pretty much everything the PowerMac G5 should have been. It's cooler, quieter, faster, has more expansion and it gives you more for your value than the older systems ever could.
skunk
Apr 27, 01:29 PM
Who said I supported Bush? He's not conservative enough for me.
Hell, the Pope's not conservative enough for you.
I know a lot about alcoholism and codependence because my mother is a nurse who specialized in treating alcoholics and other drug addicts and in counseling them. You don't help an alcoholic by protecting him from the consequences of his actions. The protection can help him make even bigger mistakes. I've seen that happen in many families I know of that include alcoholics. I also know about entitled welfare recipients who abuse social programs by demanding too much from them, getting it, and defrauding them. I saw the entitlement firsthand when a relative of mine was a landlord who rented houses to welfare recipients. The welfare recipients ruined a house, my relative kept the security deposit, and then the family got the Department of Social Services to put them into a house for twice the rent my relative charged. But the family still had the nerve to complain that my relative had overcharged them.
Ah, how I've missed the heartwarming, anecdotal and utterly irrelevant evidence you bring to a topic.
ssteve
Aug 16, 10:41 PM
Should we be surprised? I mean, really, this is good information, but it does not really make me sit up and say "WOW". It is definitely interesting for the benchmarks. Thank you, Steve, for making the switch to Intel!
inkswamp
Jul 27, 02:22 PM
but is still more productive because it handles more calculations per clock cycle
I'm no processor geek. I have a basic understanding of the terminology and how things work, so correct me if I'm wrong, but wasn't this one of the advantages that the PPC had over Intel chips? Does this mean Intel is moving toward shorter pipes? Are we talking more instructions per clock cycle or what? What does "calculations" mean in this context?
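For what it's worth, "calculations per clock cycle" is loosely referring to instructions (or micro-ops) retired per cycle, i.e. IPC. A toy Python sketch, with made-up numbers purely for illustration (not published figures for either chip), of why a wider, shorter-pipeline core at a lower clock can still out-run a deep-pipeline chip with a higher clock:

# Toy model only -- all numbers are illustrative assumptions, not real measurements.
def relative_throughput(ipc, clock_ghz):
    # Very crude: useful work per second ~ instructions retired per cycle * clock rate.
    return ipc * clock_ghz

deep_pipeline_chip = relative_throughput(ipc=1.0, clock_ghz=3.6)  # long pipeline, high clock
wide_core_chip = relative_throughput(ipc=2.0, clock_ghz=2.4)      # shorter pipeline, wider issue

print(deep_pipeline_chip, wide_core_chip)  # 3.6 vs 4.8: the lower-clocked chip gets more done per second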
4God
Jul 14, 11:07 PM
8 cores?! Wow, maybe one day!
8 cores? Yeah, you can get that in a jumbled AMD setup today.
Eraserhead
Mar 24, 02:28 AM
I supported Bush's invasion of Afghanistan.
Same here.
I think all we really needed to do in Afghanistan was to spend some real money on infrastructure.
Of course that would mean playing nice with Afghanistan's neighbours.
Hallivand
Mar 25, 10:34 PM
Since the release of Leopard, the subsequent releases haven't had the wow factor of before.
Just what I think anyway.
mc68k
Dec 7, 12:47 AM
Well, I bought a DeLorean S2. Hadn't seen it come up before in the used lot, and I've been checking pretty much every time. It was at the bottom of the list and I had to sell a few of my cars before I could buy it. 517K! Not even something I can win high-HP races with, but damn cool :cool:
takao
Dec 2, 04:53 PM
I love that I won a Mini in the Mini-only race. I'll never touch either of my Minis again.
;) That's why I haven't bothered with that race... just like in the Lupo race where you win an entry-level Lupo (I already have a Lupo Cup version).
Thank you very much for providing me with _another_ worthless <90kW FF hatchback.
ten-oak-druid
Apr 25, 02:18 PM
Because Apple is not tracking you. Apple does not get any of that data, they will never see or touch it. It is data that is stored locally on your phone out of reach from everyone except you. "Apple tracks you" would mean that the phone is sending the data 'home', but it doesn't. APPLE HAS NO IDEA WHERE THE F YOU ARE OR WERE (and they probably couldn't care less)
Prove it.
jamesryanbell
Mar 31, 03:16 PM
Jobs was right. AGAIN.
When he speaks, listen.
troop231
Mar 22, 12:56 PM
I agree.
But who in their right minds would want to own something called a Playbook? :o
Hugh Hefner of course.. :cool:
joemama
Aug 12, 07:03 AM
Who says Apple has to piggy-back off of another carrier? Let's not forget the large distribution center Apple bought some time back. Maybe the delay in the phone has more to do with that.
Steve holds grudges. While I think the Rokr was more of a market test, he won't go back with Cingular. We all know if Apple is going to do anything they are going to do it right - with Steve calling the shots.
dave420
Apr 25, 01:39 PM
but I really do not like the fact that the iPhone has a breadcrumbs database of my travels for the last 3 years!
This type of thing should not happen without users' knowledge... and it was. Or else this file would not be news!
I too don't like the idea of a device saving my location. On the other hand when I am using the Maps app for driving directions which sends my current location to Google, I would be naive to think that information isn't being stored somewhere.
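For anyone curious what that "breadcrumbs database" actually is: the file researchers reported on in 2011 is an SQLite database (widely referred to as consolidated.db) that gets copied into iTunes backups. A rough Python sketch of how you might peek at a copy yourself -- the file name and the CellLocation table/column names are assumptions based on those public write-ups, so verify them against your own backup:

import sqlite3

# Hypothetical path to a copy of the location cache pulled from an iTunes backup.
db_path = "consolidated.db"

conn = sqlite3.connect(db_path)
# Table/column names assumed from public reports of the cache; adjust if your copy differs.
for timestamp, lat, lon in conn.execute(
        "SELECT Timestamp, Latitude, Longitude FROM CellLocation LIMIT 10"):
    print(timestamp, lat, lon)
conn.close()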
Butters
Aug 6, 01:14 PM
I don't care about see-through windows. I want something that works.
see-through windows are SOOOO jaguar
amin
Aug 18, 10:28 PM
Obviously, the iMac design is inherently inferior to the Mac Pro/PowerMac.
It may be obvious, but based on your earlier statement that a Conroe iMac would be "able to crunch through" apps faster than a Mac Pro, the obvious seemed worth identifying.
But I think there's a bigger reason why Apple chose to go all quad with the Mac Pro: a duo option would have had the same performance in professional apps (again, excluding Handbrake and Toast, which are the only two examples touted). A single-processor Woodcrest or Conroe option will have the same obtainable CPU power for 90-95% of the professional market for another 6-12 months at the very least.
So you think they put an extra processor in across the line just to be able to say they had a quad? Even the AnandTech article you used as a source showed here (http://www.anandtech.com/mac/showdoc.aspx?i=2816&p=18) that PS took advantage of quad cores in Rosetta.
Here's some data regarding the Mac Pro's FSB:
*snip*
What can we take from this? Because of the use of FB-DIMMs, the Mac Pro's effective FSB is that of a ~640MHz DDR2 system.
And how does it fare in memory latency?
*snip*
Your points about latency and FSB are not separate negatives as you have made them. They are redundant theoretical concerns with implications of unclear practical significance.
As for bandwidth, although the Mac Pro has a load of theoretical bandwidth, the efficiency is an abysmal 20%. In real use a DDR2 system has 72% more usable bandwidth. (source here (http://www.anandtech.com/mac/showdoc.aspx?i=2816&p=11))
I don't know about you, but if I were a heavy user of memory-intensive apps such as Photoshop, I'd be worried. Worried in the sense that a Conroe would be noticeably faster.
I am not worried. Everything anyone has come up with on this issue is taken from that same AnandTech article. Until I see more real-world testing, I will not be convinced. Also, I expect that more pro apps such as PS will be able to utilize quad cores in the near future, if they aren't already doing so. Finally, even if Conroe is faster, Woodcrest is fast enough for me ;).
Memory issues aside, Woodcrests are faster than Conroes, 2.4% on average (source here (http://www.anandtech.com/showdoc.aspx?i=2795&p=6))
I think you misread that. They were comparing Core 2 Extreme (not Woodcrest) and Conroe to see whether the increased FSB of the former would make much difference.
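For what it's worth, the bandwidth argument above boils down to a one-line calculation: usable bandwidth = theoretical peak x measured efficiency. A rough Python sketch -- the peak figures and the DDR2 efficiency below are illustrative assumptions for a quad-channel FB-DIMM setup versus a dual-channel DDR2-800 setup, not the article's exact numbers:

def usable_bandwidth(theoretical_gb_s, efficiency):
    # Usable bandwidth is the theoretical peak scaled by the measured efficiency.
    return theoretical_gb_s * efficiency

# Assumed peaks: ~21.3 GB/s for quad-channel FB-DIMM, ~12.8 GB/s for dual-channel DDR2-800.
fbdimm = usable_bandwidth(21.3, 0.20)  # ~20% efficiency quoted above
ddr2 = usable_bandwidth(12.8, 0.60)    # higher efficiency assumed for a plain DDR2 system

print(f"FB-DIMM usable: {fbdimm:.1f} GB/s, DDR2 usable: {ddr2:.1f} GB/s")
# Despite the much larger theoretical peak, the FB-DIMM system ends up with less usable bandwidth.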