Interview with Sun's Anil Gadre
Editor's note: The following is a lightly edited transcription of a telephone interview with Anil Gadre, vice president of worldwide marketing at Sun Microsystems. The interviewer, Mark Cappel, had a 45-minute time limit.
When you look back at how prolific, in terms of creative power, Andy Bechtolsheim, Bill Joy, and Ken Okin have been, it's really pretty amazing. (Ken Okin is VP of Engineering for Desktop Products.) In 1982, they put out the Sun-1 on the desktop and brought Unix to everybody. In 1989, with the SPARCstation 1, Sun redefined what the packaging was going to look like and what the unit volumes were going to look like, and just when people thought we were going to stall, we grew market share. And remember, that was just about the time HP bought Apollo. In fact, it was the same day, April 12, 1989. People thought Sun was going to be an also-ran, but we remained a market-share leader.
At about that time, Bill, Andy, Ken Okin, and a bunch of other people were sitting around thinking about what Sun would do next, right after we had introduced this product and architecture. They knew about the plans for the 64-bit chip, the UltraSPARC. So they went off to Aspen, and I like to call it the "creation myth," because I can imagine Okin and Andy and Bill dancing around the fire or something. They were talking about the architecture. I remember raging battles going on here about what the architecture of the system was going to be. The Internet was a technical thing. There was no World Wide Web. There was no Mosaic in 1990. Yet these guys were thinking about how to make "the network is the computer" a reality.
I pulled out some notes (made at that time) the other day. These guys were asking, "What does 'the network is the computer' mean?" They were talking about a degree of collaboration not just between individuals, but between teams, and even companies. We talk about the virtual corporation in the late 1990s, but I'm not sure the notion was all that clear in 1990. It almost seems like the architecture was required to make the virtual corporation work. That's how we came up with what we call Ultra computing.
The goal was to do exactly what we did before. As you know, in 1982, Andy took off-the-shelf parts, and with clever and innovative architecture wound up getting better performance than anyone else. In 1989, he did the same thing with a high level of ASIC integration, packaging tricks, and clever engineering; he designed a machine that got (Sun) a blockbuster win. This time, I see four major innovations:
I get questions from analysts and the press saying, "Tell me about the viability of the workstation vs. high-end PCs. How are you going to outrun SGI, HP, and the rest?" We're going to do it the same way we've outrun PCs for the last 10 years. Every year someone says the workstation market is going away. But every year it grows.
Silicon Graphics has been sitting pretty with fat margins, as compared to the margins HP and Sun have. I believe that if we do the good old-fashioned Sun price/performance challenge and make 3D affordable, you'll see some serious erosion in SGI's revenue. Its stock is starting to fall, and the product transitions are getting tough for them. (SGI is) moving from high-end products to low-end, where the margins are smaller.
HP gets a lot more dollars for the unit volume than Sun does, implying HP sells higher-end systems. We think we have the price/performance challenge that brings Sun not only (performance equality) but superior performance. And not just at the chip level. Price points in this market are going to get interesting.
We deliver 100 percent binary compatibility. We've been through our trials and tribulations on that angle. We're finding from ISVs that there's no porting. They just verify the application runs. They are finding that it runs two and a half to four times faster than a SPARCstation 20. There are no changes in code. No QA cycles. You use the code that's running. If you do want to recompile, there (are advantages).
Three areas come to mind immediately.
If you want the best MCAD 3D graphics, you've got to go to HP. If you want the best imaging or texture mapping, you've got to go to Silicon Graphics. If you want the best raw power, per se, it's the Alpha. If you want the best Unix and the best networking, you come to Sun. Not that our graphics are bad; it's just that if you want the best, you have to go over there. You wind up compromising. Pardon the marketing hype, but this is the first time users truly do not have to compromise. In one Sun workstation, you get the best graphics, the best imaging, video, texture mapping, and application performance, plus superior Unix, 10,000 apps, etc.
Good old-fashioned price/performance positioning by Sun, plus clever engineering; that's where we are going to score the home run.
The issue of "has it slipped lately" is all in how you count things. I've had long discussions with Dataquest and IDC. It depends on whether they want to throw in the huge pile of NT machines. Without NT, we continue to have a 39 percent share of the market.
If you look at how fast our unit share has grown compared to HP, SGI, and so on, we continue to do extremely well. We are poised to gain even more share in the coming months. We finally have the performance, the graphics, and the increased network performance compared to these other guys. I think HP and SGI are going to have a very hard time responding to us on this one.
TI is not to blame for all of the problems with SuperSPARC. We have learned an awful lot about how you design a high-end multiprocessor. This time, TI and Sun have collaborated in learning from the mistakes of the past. We're more confident in (UltraSPARC) giving us the immediate scalability in frequency that we've desired. That was the whole story behind Viking (Sun's code name for SuperSPARC): that we were going to get the performance gains quarter to quarter and be like the Intel architecture. It was very difficult to do. In order to get a performance improvement you had to go through more than just a shrink; you had to re-lay out the chip. Re-laying out a chip is a year-long exercise.
In this case, it looks like we've cleared all of the hurdles. I am not living through any of the gut-wrenching meetings I lived through when we were about to introduce SuperSPARC. This is about as clean as you can imagine, and that's because these guys have paid much, much more attention to the simulation tools, making sure the tools test speed, and not just functionality. With SuperSPARC, we had great tools that told us the chip would work, but they didn't test all of the corner cases for performance. This time they've used simulation tools that gave them a sense of performance and speed ranges, in addition to just "hey, is it going to work the first time through?"
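To make that functional-versus-performance distinction concrete, here is a minimal, entirely hypothetical C sketch; it does not reflect Sun's or TI's actual tools, and every delay figure and corner name is invented. A functional check asks only whether the logic computes the right answer; a performance check asks whether the critical path still meets the target clock at every process, voltage, and temperature corner.

    /*
     * Hypothetical sketch -- not Sun's or TI's actual tools.
     * Functional check: "does the logic compute the right answer?"
     * Performance check: "does the critical path meet the clock at
     * every process/voltage/temperature corner?"
     */
    #include <stdio.h>

    struct corner {
        const char *name;
        double delay_scale;           /* multiplier on nominal path delay */
    };

    int main(void)
    {
        const double nominal_delay_ns = 5.2;  /* critical path, typical case */
        const double clock_period_ns  = 6.0;  /* roughly a 167-MHz target */
        struct corner corners[] = {
            { "typical",          1.00 },
            { "slow-process/hot", 1.25 },
            { "low-voltage",      1.15 },
        };
        int n = sizeof corners / sizeof corners[0];

        /* A purely functional flow would stop here: the logic is correct. */
        printf("functional: PASS (logic verified)\n");

        /* The performance flow checks every corner against the clock. */
        for (int i = 0; i < n; i++) {
            double d = nominal_delay_ns * corners[i].delay_scale;
            printf("corner %-16s delay %.2f ns: %s\n", corners[i].name, d,
                   d <= clock_period_ns ? "meets clock" : "FAILS clock");
        }
        return 0;
    }

The point of the sketch is the second loop: a chip can pass the functional check and still fail a corner, which is the trap Gadre says SuperSPARC's tools fell into.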
Learning from your mistakes. By the way, one of the things we've learned is how to make multiprocessor machines. Look at what happened to IBM recently, dropping the 620. The 604 is a simpler design for them to do, but they did not master the scalability; the 620 is a much more complex architecture. I guess you might say we've paid the heavy price in the last three years, with the SPARCstation 10, in learning how SMP works. At the chip level and the system level, conquering SMP technology is going to position us well in the next four to six years, compared to what IBM and HP are going through.
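For readers newer to the term, SMP means every processor is a peer: any CPU can run any piece of work against shared memory. Here is a toy POSIX-threads sketch of that symmetric model from the software side; it is purely illustrative, since the hard part Gadre describes is getting the symmetry right in the silicon and the memory system, not in application code.

    /*
     * Illustration only: identical workers pull tasks from one shared
     * queue, so any processor can run any piece of work. This is the
     * software view of the symmetric model, not the chip-level problem.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define NTASKS   16

    static int next_task = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            int task = (next_task < NTASKS) ? next_task++ : -1;
            pthread_mutex_unlock(&lock);
            if (task < 0)
                return NULL;          /* shared queue drained */
            printf("worker %ld ran task %d\n", id, task);
        }
    }

    int main(void)
    {
        pthread_t t[NWORKERS];
        for (long i = 0; i < NWORKERS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }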
TI has been an excellent partner in living with us through those trials.
I don't have firm numbers, but it's much better than SuperSPARC at the same point in its life. A good example: more than a month ago we started providing review units (to ISVs and journalists). With SuperSPARC, man, we were down to the wire. I don't think we got review units to editors until after the launch.
We've matured as a company. We're paying more attention to 100 percent compatibility and quality before the release. We're a bigger company.
It's something we are going to have to decide in the next six months. There are plans to push forward on the state of the art with microSPARC, and potentially even do more on HyperSPARC with follow-on chips beyond the 150-MHz part. Those product lines will not disappear overnight. It depends on how fast we can push down UltraSPARC's price and push up its scalability. The dream would be that it scales so quickly that we are getting 200-MHz or faster parts quickly for higher-end systems, and that we are able to get the yield up to get a lot more parts at much lower cost. We won't know that for another three or four months.
That was debated here over the last couple of years. In the end, what won out was that at the entry point, we need to build the least-cost machine. How far would you want to push the entry-level product in terms of price? You don't want to constrain it with the additional cost of a lot of hardware that doesn't sound like much. But by the time you add the power supply, more memory, and start adding it all up....
We are going to have a multiprocessor in the product line. We believe in bringing MP down in price as far as we can, but you really have to have an entry-level box where a single 143-MHz or 160-MHz processor is good enough and the customer is beating you up for it.
We solved our problem by way of the VIS instruction set and, architecturally, by putting in something like the UPA and so on; Be is solving the same problem with co-processors.
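For context, VIS is UltraSPARC's set of partitioned, SIMD-style media instructions. As a rough illustration of the idea, here is a plain, portable C sketch that emulates one partitioned add; these are not the actual VIS intrinsics, and the pixel values are invented.

    /*
     * Rough illustration of a VIS-style partitioned add: four 16-bit
     * pixels packed into one 64-bit word are summed lane by lane,
     * which the hardware does in a single instruction. Plain portable
     * C, not the actual VIS intrinsics.
     */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t padd16(uint64_t a, uint64_t b)
    {
        uint64_t r = 0;
        for (int lane = 0; lane < 4; lane++) {
            uint16_t x = (uint16_t)(a >> (lane * 16));
            uint16_t y = (uint16_t)(b >> (lane * 16));
            r |= (uint64_t)(uint16_t)(x + y) << (lane * 16);
        }
        return r;
    }

    int main(void)
    {
        uint64_t pixels = 0x0010002000300040ULL;   /* four 16-bit pixels */
        uint64_t delta  = 0x0001000100010001ULL;   /* brighten each by 1 */
        printf("%016llx\n", (unsigned long long)padd16(pixels, delta));
        return 0;
    }

Doing this in the instruction set rather than on a co-processor is the contrast Gadre draws with Be's approach: the media operations run on the main CPU's pipeline instead of being shipped to separate silicon.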