Interview with Mike Splain, chief technologist, Scalable Systems Group, Sun Microsystems
Listen to interview with Mike Splain
Length: 9.53 minutes. File type: mp3. File size: 3.96 MB
Welcome to a new edition of Voices. I’m Joaquim Menezes, Web editor of IT World Canada, and today our “Voice” is Mike Splain, chief technologist with Sun Microsystems’ Scalable Systems Group. Mike’s a 16-year veteran of Sun and, as chief technologist, strongly influences the technology and the operational strategies and tactics of Sun’s SPARC product line. In this interview he tells us more about Sun’s UltraSPARC T1 processor – formerly codenamed Niagara – which will debut in a new line of SunFire servers before the end of 2005.
Mike, one of the things Sun is promoting about the UltraSPARC T1 processor is its “low power usage.” We were told that it consumes less than 70 watts of power – and reminded that this is less than half the energy consumption of the Intel Xeon or IBM Power processors. What is it in the design or the architecture of the Sun UltraSPARC T1 that keeps power consumption so low?
We break the processor into two pieces. There’s the thermal diagram of the chip – the cool threads part. It was a blue picture, a sort of cartoon-like diagram that showed the eight core blocks in there – C0 through C7. Those were the actual processor cores. One of the ways we’ve kept power consumption very low is that those core elements are very simple: a simple five-stage pipeline, single issue, a simple branch predictor, very minimalist logic. And the rest of the chip is running at much lower clock frequencies – instead of 1GHz, for example, it’s running at half of that, or a quarter or a third of that. So by keeping things simple, instead of trying to make things go really fast, we try to keep things always doing useful work. If we have [execution] units, we always want to be pushing instructions through those. Instead of having them wait and burn power so that they can handle peak workloads, we make them simple and keep them really busy. So it’s really a question of efficiency as opposed to anything else.
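The trade-off Splain describes – many simple cores kept constantly busy rather than one fast core that sits idle waiting for work – has a rough software analogue in a fixed worker pool draining a surplus of small tasks. A minimal Java sketch (the class name, task counts, and simulated stall are illustrative assumptions, not anything from Sun):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThroughputDemo {
    // Run 'tasks' small jobs on a fixed pool of 'workers' threads and
    // return how many completed. The pool loosely stands in for the T1's
    // eight cores; the surplus of queued tasks stands in for its many
    // hardware threads, which keep each core supplied with useful work.
    static int runTasks(int workers, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                // Simulated short stall; when one task finishes, the
                // worker immediately picks up the next queued task.
                try { Thread.sleep(5); } catch (InterruptedException ignored) { }
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 8 workers drain 32 tasks, mirroring 8 cores x 4 threads per core.
        System.out.println(runTasks(8, 32));
    }
}
```

The point of the sketch is throughput: no single task runs faster, but the small number of workers is never left idle while work remains queued.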
Before the launch of Niagara, the impression was the processor would mainly enable lower-end Web-oriented tasks such as delivering Web pages. But this morning, we were told about the eight world records snagged by the SunFire T1000 and T2000 machines for Java performance, running Lotus R6, SAP software and more. Do you expect a big uptake for the T1-based servers in these areas?
The economics are compelling in that area. The machines are low-priced; they are [also] fully capable of doing that. There may be some psychological resistance to that. I think the natural place for them to work well is in places that are just scaling out. And the Web tier, application tier tends to be a better, natural fit, if that’s the way to phrase it. So I think initially, just by the nature of the application, where it’s mostly communication oriented, it doesn’t iterate a lot, tends to not have a lot of resident data, cache hit rates are high etc…I think the processor will get immediate uptake there. Some of the other areas, people may be a little more reluctant to actually deploy…it’s not that they won’t work well; it’s just that nobody wants to go first. You understand what I’m getting at? After there’s a sufficient user community around (say) the use of T1 as an SAP platform, then I think you’re going to see other people want to do it. The question is nobody wants to be first, nobody wants to be second, everybody wants to be third.
HP had set up a Web site soon after the Niagara announcement talking about Niagara. One of the claims they made was that software would have to be re-optimized or re-architected to work well within the new UltraSPARC T1 architecture. What’s your take on that?
That HP argument…that sort of is a snapshot moment; it isn’t borne out by the facts of today. And certainly when I look ahead six to nine months to a year, it’s definitely not going to be true. We have many examples where people have taken it out of the box and deployed it and gotten those kind of results. Every once in a while there is a situation where somebody takes it out of the box and it doesn’t work exactly [the way] we expected. Invariably that isn’t because the software is broken in some way [or] has to be completely re-written; it has a lot to do with the way it’s deployed. For instance, you may be trying to run it as one giant instance versus three or four separate instances or something like that. There’s a lot of machine configuration.
I think the other dynamic on this is it is software, and they call it software because it is soft. And what Niagara really is…is economic disruption. The cost per thread on Niagara is dramatically lower than on previous machines…dramatically. So suppose you were a developer and you wanted to build an application that was 24-way threaded. Think of how much money you would have had to spend to buy a computer capable of running 24 threads in parallel…that’s a large number. Hundreds of thousands of dollars probably. And you couldn’t even buy one – an x86 machine, maybe a Unisys mainframe or something – you couldn’t even buy one of those. But now for $2,995 you have access to a machine where the cost of each thread is phenomenally low. That means developers can go away and easily write a multithreaded program without the high entry cost.
What sort of impact will this shift towards hyperthreading and multi-core architectures have on software development? Will it change the way developers will be writing software in the future?
If the Internet has taught us anything…when you bring the cost of that thing down – for example, the cost to connect to the Internet used to be about $1,000. Today it’s about $10. That’s what the network port on whatever device you’ve got is worth. As the cost of that came down, the number of things connected to it went up by a significant factor. And what we’re doing is making the cost per thread come down. Maybe it’s $50 today, going to $25, going to $10. When you get to the point where there [are] $10 threads, the software developers are going to use them. It will happen. If threads stay at $1,000 apiece like they have been, they’re not going to get used. But when you get them down to $10 a thread, the number of people using them and building software for that will change. So there’s plenty of applications today. And in the future, you know…somebody said: “what would you tell all those people out there writing code?” I’d say the uniprocessor is dead; stop writing code for one processor; write all your software for multithreaded, because that’s where the world is going to be.
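The kind of multithreaded program Splain is urging developers toward can be as plain as splitting a computation across a pool of threads. A minimal Java sketch (class name, 24-thread count, and the toy summation workload are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    // Sum the integers in [0, n) by splitting the range into one
    // contiguous chunk per thread and adding up the partial results.
    static long sum(long n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> parts = new ArrayList<>();
        long chunk = (n + threads - 1) / threads;
        for (int t = 0; t < threads; t++) {
            long lo = t * chunk;
            long hi = Math.min(n, lo + chunk);
            parts.add(pool.submit(() -> {
                long s = 0;
                for (long i = lo; i < hi; i++) s += i;
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // 24-way threaded, echoing the 24-thread example above.
        System.out.println(sum(1_000_000, 24));
    }
}
```

On a machine with many cheap hardware threads, each chunk can run on its own thread; on a uniprocessor the same code still runs correctly, just serially.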
Following up on an announcement made this morning that Sun would open source its T1 microprocessor…what are the implications of this? Potentially could, say an Intel take this technology and fabricate something of their own? Where is this going?
Some of that goes back to what we talked about before around license models. [We] would certainly love for them to do that. Because, again, it pushes the software in the direction that we want it to go, that we think is the right direction for it to go. Could they take all of the stuff, easily remove something called Sparc from the processor and replace it with something called x86? And my answer to that is we are not giving them all the things necessary to take the chip and immediately go out and fabricate it. If they had to make that substitution, they’d have a lot of work to do to make that substitution, and then a whole lot of process work to emerge with that. And we’re not done with the T1 roadmap. And by the time they get all that done…I encourage them to go ahead and do that, because I think if I were them I would definitely do that. But remember they serve a different market too. They’ve got laptops, and they’ve got other businesses. And it isn’t clear to me that their software is set up to utilize the threads the same way that the server side is set up to do. Now maybe the people building the Xeon line could adopt that as their baseline architecture. But that would be a big shift. And it would take them time to actually implement that strategy.
And we’ve got other forms of [the] Niagara (T1) processor coming, and we’ve got our Rock processor coming, so we’re confident that our implementations will stand up. But what we really want, you know, in OpenSPARC…sure, I hope that other companies go off and build things similar to ours. I think that’s inherently a good thing. But my big expectation is that people will use it to build all the devices that we’re not capable of building today, because we have limited resources, limited market reach: storage controllers, routers, switches, embedded controllers and consumer devices. It’s a great device for all of that. That’s where the real market opportunity is. To go in and [say]: “okay, let’s take it and change it and build exactly the T1 but execute another instruction set…” Okay, that’s probably interesting. Is that the biggest opportunity? I don’t think so.
You alluded to the Rock – the Sun processor that I believe is to be announced in the 2007-2008 time frame. How will Rock differ from Niagara in terms of its capabilities and the market it’s going to be targeted at?
It extends the reach. It’s a little more aggressive in its single-thread behaviour versus its multithread behaviour, and it has other trade-offs that I’m not going to get into here. But generally it’s a very complementary processor to the one that we’ve announced today. It’s targeting a slightly…think of it as…that question you asked me before about SAP and some of the other enterprise workloads. It’s more a processor for some of those.
And that brings us to the conclusion of our interview with Mike Splain. Thanks for listening and until our next edition of Voices this is Joaquim Menezes wishing you a merry Christmas and a safe and pleasant holiday season.