The question was familiar, but it was the first time a reporter had asked me to go on the record since I left PalmSource. She said, "Given all the uncertainty about Palm, should people avoid buying their products?"
I asked what uncertainty she meant.
"You know, all the uncertainty about what they're doing with Microsoft. It's the same as RIM Blackberry, where people say you shouldn't buy because of the uncertainty about their patents."
I'm glad to say that my answer today is the same as it was back when I worked at PalmSource: buy what you need and don't worry about what people say.
If you look for uncertainty, you can find it for any mobile product on the market. Symbian's losing licensees. Microsoft has lost licensees (and missed a few shipment deadlines). Both platforms are extremely dependent on single hardware companies -- Nokia for Symbian and HTC for Microsoft. And for every device there's always a new model or a new software version about to obsolete the current one.
Obviously, if a company's on the verge of bankruptcy, you should be careful. But Palm is profitable, and they know how loyal Palm OS users are. They have a huge financial incentive to keep serving those users as long as the users want to buy.
RIM is a slightly more intimidating issue because you never know what the government might do. One week RIM's on the brink of ruin, the next week the patent office is about to destroy the whole case against them. This sort of unpredictability doesn't encourage innovation and investment, which I thought was the whole point of the patent system. What we have now seems more like playing the lottery. But I don't think it's going to lead to a shutdown of RIM's system. If NTP destroys RIM's business, there won't be anything left to squeeze money out of. I think what you're seeing now is brinksmanship negotiation from both sides. It's entertaining, but not something you should base a purchase decision on.
Yes, you can get yourself worked up about risk on a particular platform if you want to. But that level of risk is minuscule compared to the near certainty you'll be disappointed if you buy a "safe" product that doesn't really do what you need. There's much more diversity in the mobile market than there is in PCs. One brand often isn't a good substitute for another, and a product that's appealing to one person may be repulsive to another.
If you're an individual user, you could easily talk yourself into buying a device you'll hate every day. If you're an IT manager specifying products for your company, you could easily end up deploying products that employees won't use.
The safest thing to do is ignore the commentators and buy the device that best meets your needs.
NTT DoCoMo buys 11.66% of Palm OS. Watch this space.
Posted by Andy at 12:30 AM
When I worked at Palm, I was always amazed at how different the mobile market looked in various parts of the world. Although human beings are basically the same everywhere, the mobile infrastructure (key companies, government regulation, relative penetration of PCs, local history) is dramatically different in every country, and so the markets behave very differently. Even within Europe, the use and adoption of mobile technology varies tremendously from country to country.
And then there's Japan, which has its own unique mobile ecosystem that gets almost completely ignored by the rest of the world, even though a lot of the most important mobile trends started there first (cameraphones, for example).
I try to keep tabs on Japan through several websites that report Japanese news in English. Two are Mobile Media Japan and Wireless Watch Japan. They both post English translations of Japanese tech news, and you find all sorts of interesting tidbits that are almost completely beneath the radar in the US.
Case in point: at the end of November, NTT DoCoMo announced that it is raising its ownership of Access Corp from 7.12% of the company's stock to 11.66%, for a price of about $120 million. That's right, the same Access that just bought PalmSource. So DoCoMo, one of the world's most powerful operators, now owns 11.66% of Palm OS and the upcoming Linux product(s). This story got passing mentions on a couple of enthusiast bulletin boards, but I didn't see anything about it anywhere else.
It's possible that the DoCoMo investment has nothing to do with Palm OS or Linux. Access provides the browser for a lot of DoCoMo phones, and it frequently subsidizes suppliers in various ways for custom development. But $120 million is a lot for just customizing a browser…
DoCoMo is a strong supporter of mobile Linux for 3G phones, and you have to assume that Access is in there pitching its upcoming OS. I can just picture the conversation: "You already get the browser from us, why don't we just bundle it with the OS for one nice low fee?" Meanwhile, Panasonic just dropped Symbian and plans to refocus all its phone development on 3G phones and mobile Linux. You have to figure Access is talking to them as well.
I don't think most of the mobile observers in the US and Europe realize how intense the interest is in mobile Linux in Asia. A lot of very large companies are putting heavy investment into it. I'm sure this is why Access was willing to pay more than double PalmSource's market value to buy the company.
It's going to be very interesting to see what Access does to make that investment pay off in 2006.
(PS: In case you're wondering, I have no ties to PalmSource/Access and no motivation to hype their story. I just want folks to understand that the mobile OS wars aren't even close to over.)
Is Symbian’s ownership a house of cards?
Posted by Andy at 12:24 AM
I think it's very likely that recent changes in Symbian-land will open a new chapter in the soap opera over the company's ownership structure. In the end, there will probably be either some significant new Symbian owners, or Nokia will finally become majority owner of the company.
Either development will generate a lot of public discussion, hand-wringing, and general angst, but probably won't make any meaningful difference in the company's behavior and fortunes. It should be entertaining, though. Here's what to watch for.
Symbian is owned by a consortium of mobile phone companies:
Nokia 47.9%
Ericsson 15.6%
SonyEricsson 13.1%
Panasonic 10.5%
Siemens 8.4%
Samsung 4.5%
Psion, the company that invented Symbian OS, used to be a major owner. But last year it pulled out and announced plans to sell its shares to Nokia, which would have ended up owning more than 60% of the company. There was a huge fuss, with many people saying that if Nokia owned more than 50% of the shares, Symbian would no longer be an independent platform but a slave to Nokia's whims. In the end, Psion's shares were split among Nokia, SonyEricsson, Panasonic, and Siemens, with Nokia getting 47.9%. Crisis averted.
But the big fuss was actually kind of a sideshow, because under the rules of Symbian's governance, major initiatives must be approved by 70% of the ownership. In other words, you have to own 70% of the company to bend it totally to your will, and if you have just 30% ownership you can veto any major plan.
Guess what -- Nokia was and remains the only partner with more than 30% ownership. So it already exercises huge influence over what Symbian does. But it won't have full control unless and until it hits 70%. As far as I can tell, what bugged the other Symbian partners was the impression people would get if Nokia owned 50% or more of Symbian, and that's what they were fighting over in 2004.
Fast forward to today. Two of Symbian's owners, companies that played a key role in the rescue mission of 2004, are apparently dropping Symbian OS. Panasonic just dramatically restructured its mobile phone business, focusing solely on 3G phones and stopping all Symbian development efforts in favor of Linux. In mid-2005 Siemens sold its mobile phone division, including its Symbian shares, to BenQ. Siemens and BenQ were both Symbian licensees, and both have reportedly had serious problems with product delays and poor Symbian sales. There have been persistent rumors in the Symbian community that BenQ is dropping the OS, and DigiTimes, a Taiwanese publication with a history of getting insider stories, just ran an article quoting anonymous sources saying the same thing. For the record, I could not find an official announcement from BenQ about this. But the continuing reports and rumors are very ominous.
The implications of all this haven't been discussed much online, but let's face it -- how long will BenQ and Panasonic want to remain part owners of an OS they no longer use and that damaged their phone businesses? That means 18.9% of Symbian is likely to be for sale in the near future (if it isn't already).
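For what it's worth, the arithmetic is simple enough to sketch out. Here's a back-of-the-envelope calculation in Python, using the share figures listed above and the 70%/30% thresholds from the governance rules as I understand them; it's illustration, not gospel:

```python
# Back-of-the-envelope arithmetic on Symbian ownership (figures as listed above).
# The 70% approval threshold, and the implied 30% blocking stake, come from the
# governance rules described in this post.

shares = {
    "Nokia": 47.9,
    "Ericsson": 15.6,
    "SonyEricsson": 13.1,
    "Panasonic": 10.5,
    "Siemens/BenQ": 8.4,
    "Samsung": 4.5,
}

APPROVAL_THRESHOLD = 70.0   # percent needed to push a major initiative through
BLOCKING_STAKE = 30.0       # percent needed to veto one (100 - 70)

# The stake that may soon be for sale: Panasonic plus the old Siemens/BenQ shares.
for_sale = shares["Panasonic"] + shares["Siemens/BenQ"]
print(f"Potentially for sale: {for_sale:.1f}%")              # 18.9%

# One scenario discussed below: Nokia buys all of it.
nokia_after = shares["Nokia"] + for_sale
print(f"Nokia after buying it all: {nokia_after:.1f}%")       # 66.8%, still short of 70%
print(f"Nokia can already veto alone: {shares['Nokia'] > BLOCKING_STAKE}")      # True
print(f"Nokia could then dictate alone: {nokia_after >= APPROVAL_THRESHOLD}")   # False
```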
What will happen?
It's possible that a transfer of ownership will be negotiated behind the scenes, in which case Symbian might avoid another big public debate. That would be much better for the company and its partners, but less fun for people who write weblogs.
Whether the change happens in public or private, I think there are several possibilities for the future ownership structure:
--New owners buy the shares. I think the most likely candidates would be NTT DoCoMo and Fujitsu. DoCoMo is shipping a lot of 3G Symbian phones in Japan, and Fujitsu makes most of them. Sharp is also a candidate, I guess; it's just started offering Symbian phones. I don't know how the other Symbian partners (and other operators) would feel about DoCoMo owning part of the company. My guess is the operators would be profoundly uncomfortable, and would be less willing to carry Symbian phones. So Fujitsu and maybe Sharp are probably the best bets.
--Nokia buys the 18.9%. This would give it just under 70% ownership. In other words, there would be no practical change in Symbian's governance, but the optics would scare a lot of licensees. In particular, SonyEricsson has expressed strong discomfort with Nokia getting more than 50%. In the analyst sales estimates I used to get while at PalmSource, shipments of SonyEricsson's Symbian phones had been pretty much flat for a very long time, and the company doesn't seem anxious to put the OS in a lot more models. Increased Nokia ownership of Symbian might be the last straw that would drive S-E away from the OS completely. In that case Nokia would probably have to step in and buy even more of the company.
--Symbian goes public. For years Symbian employees were told that the company was headed toward a public offering, which generated a lot of excitement among them. But eventually the phone companies refused to surrender their control over Symbian and the IPO talk died out. If Nokia were unwilling to take on even more ownership of Symbian, and other investors could not be found, then I guess it's possible that all the owners might decide they want to cash out. This would be exciting for Symbian employees for a little while, but I think that if Nokia didn't control Symbian it would be less willing to use the OS in the long term. So Symbian might go public just in time to evaporate. Which leads me to the last scenario:
--Nokia dumps Symbian and the whole thing falls apart. I don't think this is likely, but phone industry analysis company ARC Chart estimates that Nokia's currently paying Symbian more than $100 million a year for the privilege of bundling Symbian OS with a lot of its phones, and the more Nokia uses Symbian, the more money it owes. It's clear that Nokia is deeply committed to Series 60, its software layer running on top of Symbian. But Series 60 could (with a huge amount of work) be ported to run on top of something else.
Having lived through a couple of OS migrations at Apple and Palm, I should emphasize that changing your OS is very easy for an analyst to write about and very, very, very hard to do in reality. You spend years (literally) rewriting plumbing rather than innovating. It's not something you do unless you have a lot of incentive.
But saving $100 million a year is a pretty big incentive…
The other question people will ask is what this means for phone buyers. Should you avoid Symbian devices because of this uncertainty? My answer: absolutely not. Buy the device that best meets your needs, regardless of OS. If you avoided every smartphone with some uncertainty around it, you wouldn't be able to buy anything. I'm going to come back to this subject in a future post.
Quick notes: a computing radio show, and custom shoes on the web
Posted by Andy at 10:10 PM
Computer Outlook is a syndicated radio program that covers various computing topics (it's also streamed over the Internet, so you can hear it by going to the website). They did a live broadcast from the last PalmSource developer conference, and I had a nice time talking with them at the end of the conference. Last month they asked me to come on the show again. We had fun talking about various topics, mostly mobility-related. They've posted a recording of the program here.
This second item has only the most tenuous connection to mobile computing, but I'm posting it anyway because I think it's cool. You can now design your own Jack Purcells tennis shoes online. The revolutionary importance of this is probably going to be lost on...well, just about everyone reading this, so let me give you a little context. Jack Purcells are tennis shoes that first became famous in the 1930s because they were endorsed by a famous badminton player, Jack Purcell. (Why they're not called badminton shoes, I don't know.) The design has barely changed since then, and today they are just about the most primitive tennis shoes you'll ever see, basically a flat slab of rubber with stitched canvas glued on top. When I was a kid, they were quite popular, and there's a famous photo of James Dean wearing a pair. Gnarly. Unfortunately, in the 1980s, with the rise of sophisticated shoes from Nike and others, Jack Purcells almost completely disappeared from the market.
And yet they never quite completely disappeared. Sometime in the 1990s they became a hot ticket in the Hip Hop community. The Urban Dictionary put it best: "Eternally hip and understated, this is the maverick shoe of simplicity. Its design is has been virtually unchanged since the 1930's. It is a clean and bold casual court shoe and its subtleness has transcended time....Converse All-Stars are cool...but the coolest people on the planet wear Jack Purcells."
The coolest people on the planet, okay?
In an ironic twist, Nike bought the Converse and Jack Purcell brands a couple of years ago and has been putting a lot of investment into them. The most interesting thing Nike has done is its online shoe design engine, which lets you custom-design your own pair of Jack Purcells. You can pick the colors for everything from the rubber sole to the stitching in the canvas, and they'll also monogram the shoes for you.
Here's the shoe I designed:
If this were the Long Tail blog, I'd wax rhapsodic about how the Web lets individuals get exactly the shoes they want. But I'll leave that to others. I do think it's interesting that there's a customizer for Nike shoes as well, but to me it's a lot less appealing because Nikes are so diverse already. The fun thing about custom Jack Purcells is that there's a single well-understood design that you get to do your own riff on. It's kind of like customizing a '67 Mustang.
If you want to create your own Jacks, follow this link and hover over "Design your own." In addition to having fun with the shoes, you'll be exposed to one of the nicest animated sites I've seen on the Web. And that's really why I posted about this. Although the shoes themselves are fun, what I admire most is how Nike used the Web to make the world's most primitive tennis shoes feel cutting edge.
"Software as a service" misses the point
Posted by Andy at 3:44 PM
At the end of October, Microsoft's Ray Ozzie and Bill Gates wrote internal memos announcing that Microsoft must pursue software services. The memos were leaked to the public, I believe intentionally. They drove enormous press coverage of Microsoft's plans, and of the services business model in general.
Most of the coverage focused on two aspects of software as services: downloading software on demand rather than pre-installing it; and paying for it through advertising rather than retail purchase.
Here are two examples of the coverage. The first is from The Economist:
"At heart, said Mr Ozzie, Web 2.0 is about 'services' (ranging from today's web-based e-mail to tomorrow's web-based word processor) delivered over the web without the need for users to install complicated software on their own computers. With a respectful nod to Google, the world's most popular search engine and Microsoft's arch-rival, Mr Ozzie reminded his colleagues that such services will tend to be free—ie, financed by targeted online advertising as opposed to traditional software-licence fees."
Meanwhile, the New York Times wrote, "if Microsoft shrewdly devises, for example, online versions of its Office products, supported by advertising or subscription fees, it may be a big winner in Internet Round 2."
I respect the Times and love the Economist, but in this case I think they have missed the point, as have most of the other media commenting on the situation. The advertising business model is important to the Microsoft vs. Google story because ads are where Google makes a lot of revenue, and Microsoft wants that money. But the really transformative thing happening in software right now isn't the move to a services business model, it's the move to an atomized development model. The challenge extends far beyond Microsoft. I think most of today's software companies could survive a move to advertising, but the change in development threatens to obsolete almost everything, the same way the graphical interface wiped out most of the DOS software leaders.
The Old Dream is reborn
The idea of component software has been around for a long time. I was first seduced by it in the mid 1990s, when I was at Apple. One of the more interesting projects under development there at the time was something called OpenDoc. In typical Apple fashion, different people had differing visions on what OpenDoc was supposed to become. Some saw it as primarily a compound document architecture -- a better way to mix multiple types of content in a single document. Other folks, including me, wanted it to grow into a more generalized model for creating component software -- breaking big software programs down into a series of modules that could be mixed and matched, like Lego blocks.
For example, if you didn't like the spell-checker built into your word processor, you could buy a new one and plug it in. Don't like the way the program handles footnotes? Plug in a new module. And so on.
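To give a flavor of the idea in modern terms -- and with entirely made-up names that have nothing to do with OpenDoc's real architecture -- a component model boils down to something like this sketch:

```python
# A minimal, hypothetical sketch of the component idea: the word processor only
# knows about a SpellChecker interface, so any module implementing it can be
# plugged in or swapped out without touching the rest of the application.
from typing import Protocol

class SpellChecker(Protocol):
    def check(self, text: str) -> list[str]:
        """Return a list of misspelled words."""

class BuiltInSpellChecker:
    WORDS = {"the", "quick", "brown", "fox"}
    def check(self, text: str) -> list[str]:
        return [w for w in text.lower().split() if w not in self.WORDS]

class FancyThirdPartySpellChecker:
    def check(self, text: str) -> list[str]:
        # Imagine a smarter dictionary, grammar hints, and so on.
        return []

class WordProcessor:
    def __init__(self, spell_checker: SpellChecker):
        self.spell_checker = spell_checker   # the swappable module

    def proofread(self, text: str) -> list[str]:
        return self.spell_checker.check(text)

doc = WordProcessor(BuiltInSpellChecker())
print(doc.proofread("The quick brwn fox"))          # ['brwn']
doc.spell_checker = FancyThirdPartySpellChecker()   # plug in a replacement module
print(doc.proofread("The quick brwn fox"))          # []
```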
The benefit was supposed to be much faster innovation (because pieces of an app could be revised independently), and a market structure that encouraged small developers to build on each other's work. Unfortunately, like many other things Apple did in the 1990s, OpenDoc was never fully implemented and it faded away.
But the dream of components as a better way to build software has remained. Microsoft implemented part of it in its .Net architecture -- companies can develop software using modules that are mixed and matched to create applications rapidly. But the second part of the component dream, an open marketplace for mixing and matching software modules on the fly, never happened on the desktop. So the big burst in software innovation that we wanted to drive never happened either. Until recently.
The Internet is finally bringing the old component software dream to fruition. Many of the best new Internet applications and services look like integrated products, but are actually built up of components. For example, Google Maps consists of a front end that Google created, running on top of a mapping database created by a third party and exposed over the Internet as a service. Google is in turn enabling companies to build more specialized services on top of its mapping engine.
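Here's a rough, hypothetical sketch of that layering pattern. The "mapping service" below is a stand-in stub, not any real provider's API; the point is just that each layer adds its own value on top of somebody else's component and can itself be built upon:

```python
# Hypothetical sketch of the layering pattern described above: a front end that
# adds value on top of a third-party data service, and is itself exposed so
# others can build on top of it. Names and data are invented for illustration.

def third_party_geocode(address: str) -> dict:
    """Stand-in for an external mapping/geocoding service exposed over the web."""
    fake_db = {"1 Infinite Loop, Cupertino": (37.33, -122.03)}
    lat, lon = fake_db.get(address, (0.0, 0.0))
    return {"address": address, "lat": lat, "lon": lon}

def coffee_finder(address: str) -> dict:
    """Our 'front end': composes the third-party service with our own data."""
    location = third_party_geocode(address)                      # somebody else's component
    location["nearby_coffee"] = ["Cafe Verona", "Blue Bottle"]   # our added value
    return location

# A still more specialized service could now build on coffee_finder the same way.
print(coffee_finder("1 Infinite Loop, Cupertino"))
```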
WordPress is my favorite blogging tool in part because of the huge array of third party plug-ins and templates for it. Worried about comment spam? There are several nice plug-ins to fight it. Want a different look for your blog? Just download a new template.
You have to be a bit technical to make it all work, but the learning curve's not steep (hey, I did it). For people who are technical, the explosion of mix and match software and tools on the web is an incredible productivity multiplier. I had lunch recently with a friend who switched from mobile software development to web services in part because he could get so much more done in the web world. To create a new service, he could download an open source version of a baseline service, make a few quick changes to it, add some modules from other developers, and have a prototype product up and running within a couple of weeks. That same sort of development in the traditional software world would have taken a large team of people and months of work.
This feeling of empowerment is enough to make a good programmer giddy. I think that accounts for some of the inflated rhetoric you see around Web 2.0 -- it's the spill-over from a lot of bright people starting to realize just how powerful their new tools really are. I think it's also why less technical analysts have a hard time understanding all the fuss over Web 2.0. The programmers are like carpenters with a shop full of shiny new drills and saws. They're picturing all the cool furniture they can build. But the rest of us say, "so what, it's a drill." We won't get it until more of the furniture is built.
The other critical factor in the rise of this new software paradigm is open source. When I was scheming about OpenDoc, I tried to figure out an elaborate financial model in which developers could be paid a few dollars a copy for each of their modules, with Apple or somebody else acting as an intermediary. It was baroque and probably impractical, but I thought it was essential because I never imagined that people might actually develop software modules for free.
Open source gets us past the whole component payment bottleneck. Instead of getting paid for each module, developers share a pool of basic tools they can use to assemble their own projects quickly, and they focus on getting paid for those projects. For the people who know how to work this way, the benefits far outweigh the cost of sharing some of your work.
The Rise of the Mammals
Last week I talked with Carl Zetie, a senior analyst at Forrester Research. Carl is one of the brightest analysts I know (I'd point you to his blog rather than his bio, but unfortunately Forrester doesn't let its employees blog publicly).
Carl watches the software industry very closely, and he has a great way of describing the change in paradigm. He sees the world of software development breaking into two camps:
The old paradigm: Large standard applications. This group focuses on the familiar world of APIs and operating systems, and creates standalone, integrated, feature-complete applications.
The new paradigm: Solutions that are built up out of small atomized software modules. APIs don't matter very much here because the modules communicate through metadata. This group changes standards promiscuously (standards can be swapped in and out because the differences are buffered by the metadata). Carl cited the development tool Eclipse as a great example of this world; the tool itself can be modified and adapted ad hoc.
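Here's a rough illustration of the second camp's style, with invented names: modules hand each other self-describing metadata rather than binding to each other's APIs, so one producer can be swapped for another without rewriting the consumer:

```python
# Hypothetical sketch: loosely coupled modules that communicate through metadata
# (a dict here, but it could just as well be XML or JSON) rather than calling
# each other's APIs directly. Either producer can be swapped in without changing
# the consumer, because both emit the same self-describing record.

def rss_fetcher() -> dict:
    return {"type": "article", "title": "Symbian shakeup", "source": "rss"}

def scraper_fetcher() -> dict:
    return {"type": "article", "title": "Symbian shakeup", "source": "scraper"}

def render(item: dict) -> str:
    # The consumer only cares about the metadata fields, not which module produced them.
    if item.get("type") != "article":
        raise ValueError("unknown item type")
    return f"{item['title']} (via {item['source']})"

for fetch in (rss_fetcher, scraper_fetcher):   # modules swapped freely
    print(render(fetch()))
```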
I think the second group is going to displace the first group, because the second group can innovate so much faster. It'll take years to play itself out, but it's like the mice vs. the dinosaurs, only this time the mice don't need an asteroid to give them a head start.
This situation is very threatening for the established software companies. Almost all of the big ones are based on old-style development, using large teams of programmers to create ponderous software programs with every feature you could imagine. The scale of their products alone has been a huge barrier to entry -- you'd have to duplicate all the features of a PowerPoint or an Illustrator before you could even begin to attack it. Few companies can afford that sort of up-front investment.
But the component paradigm, combined with open source, turns that whole situation on its head. The heavy features of a big software program become a liability -- because the program's so complex, you have to do an incredible amount of testing anytime you add a new feature. The bigger the program becomes, the more testing you have to do. Innovation gets slower and slower. Meanwhile, the component guys can sprint ahead. Their first versions are usually buggy and incomplete, but they improve steadily over time. Because their software is more loosely coupled, they can swap modules without rewriting everything else. If one module turns out to be bad, they just toss it out and use something else.
There are drawbacks to the component approach, of course. For mission-critical applications that require absolute reliability, something composed of modules from various vendors is scary. Support can also be a problem -- when an application breaks, how do you determine which component is at fault? And it's hard (almost laughable at this point) to picture a major desktop application replaced by today's generation of online modules and services. The ones I've seen are far too primitive to displace a mainstream desktop app today.
But I think the potential is there. The online component crowd is systematically working through the problems, and if you project their progress out five years or so, I think there will come a time when their more rapid innovation outweighs the integration advantages of traditional monolithic software. Components are already winning in online consumer services (that's where most of the Web 2.0 crowd is feeding today), and there are some important enterprise products. Over time I think the component products will eat their way up into enterprise and desktop productivity apps.
In this context, the fuss about software you can download on the fly, and support through advertising, is a sideshow. For many classes of apps it will be faster to use locally cached software for a long time to come, and I don't know if advertising in a productivity application will ever make much sense. But I'm certain that the change in development methodology will reshape the software industry. The real game to watch isn't ad-supported services vs. packaged software, it's atomized development vs. monolithic development.
What does it all mean?
I think this has several very important implications for the industry:
The big established software companies are at risk. The new development paradigm is a horrible challenge for them, because it requires a total change in the way they create and manage their products. Historically, most computing companies haven't survived a transition of this magnitude, and much of the software industry doesn't even seem to be fully aware of what's coming. For example, I recently saw a short note in a very prominent software newsletter, regarding Ruby on Rails (an open source web application framework, one of the darlings of the Web 2.0 crowd). "It's yet another Internet scripting language," the newsletter wrote. "We don't know if it's important. Here are some links so you can decide for yourself."
I guess I have to congratulate them for writing anything, but what they did was kind of like saying, "here's a derringer pistol, Mr. Lincoln. Don't know if it's important or not, but you might want to read about it."
Some software companies are trying to react. I believe the wrenching re-organization that Adobe's putting itself through right now is in part a reaction to this change in the market. The re-org hasn't gotten much coverage -- in part because Adobe hasn't released many details, and in part because the press is obsessed with Google vs. Microsoft. But Adobe has now put Macromedia general managers in charge of most of its business units, displacing a lot of long-time Adobe veterans who are very bitter about being ousted two weeks before Christmas, despite turning in good profits. I've been getting messages from friends at Adobe who have been laid off recently, and all of them say they were pushed aside for Macromedia employees. "It's a reverse acquisition," one friend told me.
I personally think what Adobe's doing is grafting Macromedia's Internet knowledge and reflexes into a company that has been very focused on its successful packaged software franchises. It's going to be a painful integration process, but the fact that Adobe's willing to put itself through this tells you how important the change is. Better to go through agonizing change now than to lose the whole company in five years.
What does Microsoft do? In the old days, Microsoft's own extreme profitability made it straightforward for the company to win a particular market. Microsoft could spend money to bleed the competitor (for example, give away the browser vs. Netscape), while it worked behind the scenes to duplicate and then surpass the competitor's product. But the component software crowd doesn't want to take over Microsoft's revenue stream; it wants to destroy about 90% of it, and then can be very successful living off the remaining 10% or so. To co-opt their tactics, Microsoft would have to destroy most of its own revenue.
Here's a simplified example of what I mean: some of the component companies are developing competitors to the Office apps. A couple of examples are Writely and JotSpot's Tracker. Microsoft could fight them by trimming down Word and Excel into lightweight frameworks and inviting developers to extend them. The trouble is that you can't charge a traditional Word or Excel price for a basic framework; if you do, competing frameworks will beat you on price. And if you enable third parties to make the extensions, then they'll get any revenue that comes from extensions. I don't see how Microsoft could sell enough advertising on a Word service to make up the couple of hundred dollars in gross margin per user that it gets today from Office (and that it gets to collect again every time it upgrades Office).
The Ozzie memo seems to suggest that Microsoft will try to integrate across its products, to make them complete and interoperable in ways that will be very hard for the component crowd to copy. But that adds even more complexity to Microsoft's development process, which is already becoming famous for slowness. If you gather all the dinosaurs together into a herd, that doesn't stop the mice from eating their eggs.
I wonder if senior management at Microsoft sees the scenario this starkly. If so, a logical approach might be to make an all-out push to displace the Google search engine and take over all of Google's advertising business, to offset the coming loss of applications revenue. Will Microsoft suddenly offer to install free WiFi in every city in the world? Don't laugh; historically, when Microsoft felt truly threatened it was willing to take radical action. Years ago the standard assumption was that Gates and Ballmer utterly controlled Microsoft -- they held enough of the company's stock that they could ignore the other shareholders if they had to. I'm not sure if that's still true. Together the stock holdings of Gates and Ballmer have dropped to about 13% of the company. Microsoft execs hold another 14%, and Paul Allen has about 1%. Taken together, that's 28%. Is that enough to let company management make radical moves, even at the expense of short-term profits? I don't know. But I wouldn't bet against it.
The rebirth of IT? The other interesting potential impact was pointed out to me by a co-worker at Rubicon Consulting, Bruce La Fetra. Just as new software companies can become more efficient by working in the component world, companies can gain competitive advantage by aggressively using the new online services and open source components for their own in-house development. But doing so requires careful integration and support on the part of the IT staff. In other words, the atomization of software makes having a great IT department a competitive advantage.
How about that, maybe IT does matter after all.
Most of the coverage focused on two aspects of software as services: downloading software on demand rather than pre-installing it; and paying for it through advertising rather than retail purchase.
Here are two examples of the coverage. The first is from The Economist:
"At heart, said Mr Ozzie, Web 2.0 is about 'services' (ranging from today's web-based e-mail to tomorrow's web-based word processor) delivered over the web without the need for users to install complicated software on their own computers. With a respectful nod to Google, the world's most popular search engine and Microsoft's arch-rival, Mr Ozzie reminded his colleagues that such services will tend to be free—ie, financed by targeted online advertising as opposed to traditional software-licence fees."
Meanwhile, the New York Times wrote, "if Microsoft shrewdly devises, for example, online versions of its Office products, supported by advertising or subscription fees, it may be a big winner in Internet Round 2."
I respect the Times and love the Economist, but in this case I think they have missed the point, as have most of the other media commenting on the situation. The advertising business model is important to the Microsoft vs. Google story because ads are where Google makes a lot of revenue, and Microsoft wants that money. But the really transformative thing happening in software right now isn't the move to a services business model, it's the move to an atomized development model. The challenge extends far beyond Microsoft. I think most of today's software companies could survive a move to advertising, but the change in development threatens to obsolete almost everything, the same way the graphical interface wiped out most of the DOS software leaders.
The Old Dream is reborn
The idea of component software has been around for a long time. I was first seduced by it in the mid 1990s, when I was at Apple. One of the more interesting projects under development there at the time was something called OpenDoc. In typical Apple fashion, different people had differing visions on what OpenDoc was supposed to become. Some saw it as primarily a compound document architecture -- a better way to mix multiple types of content in a single document. Other folks, including me, wanted it to grow into a more generalized model for creating component software -- breaking big software programs down into a series of modules that could be mixed and matched, like Lego blocks.
For example, if you didn't like the spell-checker built into your word processor, you could buy a new one and plug it in. Don't like the way the program handles footnotes? Plug in a new module. And so on.
The benefit was supposed to be much faster innovation (because pieces of an app could be revised independently), and a market structure that encouraged small developers to build on each others' work. Unfortunately, like many other things Apple did in the 1990s, OpenDoc was never fully implemented and it faded away.
But the dream of components as a better way to build software has remained. Microsoft implemented part of it in its .Net architecture -- companies can develop software using modules that are mixed and matched to create applications rapidly. But the second part of the component dream, an open marketplace for mixing and matching software modules on the fly, never happened on the desktop. So the big burst in software innovation that we wanted to drive never happened either. Until recently.
The Internet is finally bringing the old component software dream to fruition. Many of the best new Internet applications and services look like integrated products, but are actually built up of components. For example, Google Maps consists of a front end that Google created, running on top of a mapping database created by a third party and exposed over the Internet as a service. Google is in turn enabling companies to build more specialized services on top of its mapping engine.
WordPress is my favorite blogging tool in part because of the huge array of third party plug-ins and templates for it. Worried about comment spam? There are several nice plug-ins to fight it. Want a different look for your blog? Just download a new template.
You have to be a bit technical to make it all work, but the learning curve's not steep (hey, I did it). For people who are technical, the explosion of mix and match software and tools on the web is an incredible productivity multiplier. I had lunch recently with a friend who switched from mobile software development to web services in part because he could get so much more done in the web world. To create a new service, he could download an open source version of a baseline service, make a few quick changes to it, add some modules from other developers, and have a prototype product up and running within a couple of weeks. That same sort of development in the traditional software world would have taken a large team of people and months of work.
This feeling of empowerment is enough to make a good programmer giddy. I think that accounts for some of the inflated rhetoric you see around Web 2.0 -- it's the spill-over from a lot of bright people starting to realize just how powerful their new tools really are. I think it's also why less technical analysts have a hard time understanding all the fuss over Web 2.0. The programmers are like carpenters with a shop full of shiny new drills and saws. They're picturing all the cool furniture they can build. But the rest of us say, "so what, it's a drill." We won't get it until more of the furniture is built.
The other critical factor in the rise of this new software paradigm is open source. When I was scheming about OpenDoc, I tried to figure out an elaborate financial model in which developers could be paid a few dollars a copy for each of their modules, with Apple or somebody else acting as an intermediary. It was baroque and probably impractical, but I thought it was essential because I never imagined that people might actually develop software modules for free.
OpenSource gets us past the whole component payment bottleneck. Instead of getting paid for each module, developers share a pool of basic tools that they can use to assemble their own projects quickly, and they focus on just getting paid for those projects. For the people who know how to work this way, the benefits far outweigh the cost of sharing some of your work.
The Rise of the Mammals
Last week I talked with Carl Zetie, a senior analyst at Forrester Research. Carl is one of the brightest analysts I know (I'd point you to his blog rather than his bio, but unfortunately Forrester doesn't let its employees blog publicly).
Carl watches the software industry very closely, and he has a great way of describing the change in paradigm. He sees the world of software development breaking into two camps:
The old paradigm: Large standard applications. This group focuses on the familiar world of APIs and operating systems, and creates standalone, integrated, feature-complete applications.
The new paradigm: Solutions that are built up out of small atomized software modules. APIs don't matter very much here because the modules communicate through metadata. This group changes standards promiscuously (they can be swapped in and out because the changes are buffered by the use of metadata). Carl cited the development tool Eclipse as a great example of this world; the tool itself can be modified and adapted ad hoc.
I think the second group is going to displace the first group, because the second group can innovate so much faster. It'll take years to play itself out, but it's like the mice vs. the dinosaurs, only this time the mice don't need an asteroid to give them a head start.
This situation is very threatening for the established software companies. Almost all of the big ones are based on old-style development, using large teams of programmers to create ponderous software programs with every feature you could imagine. The scale of their products alone has been a huge barrier to entry -- you'd have to duplicate all the features of a PowerPoint or an Illustrator before you could even begin to attack it. Few companies can afford that sort of up-front investment.
But the component paradigm, combined with open source, turns that whole situation on its head. The heavy features of a big software program become a liability -- because the program's so complex, you have to do an incredible amount of testing anytime you add a new feature. The bigger the program becomes, the more testing you have to do. Innovation gets slower and slower. Meanwhile, the component guys can sprint ahead. Their first versions are usually buggy and incomplete, but they improve steadily over time. Because their software is more loosely coupled, they can swap modules without rewriting everything else. If one module turns out to be bad, they just toss it out and use something else.
There are drawbacks to the component approach, of course. For mission-critical applications that require absolute reliability, something composed of modules from various vendors is scary. Support can also be a problem -- when an application breaks, how do you determine which component is at fault? And it's hard (almost laughable at this point) to picture a major desktop application replaced by today's generation of online modules and services. The ones I've seen are far too primitive to displace a mainstream desktop app today.
But I think the potential is there. The online component crowd is systematically working through the problems, and if you project out their progress for five years or so, I think there come a time when their more rapid innovation will outweigh the integration advantages of traditional monolithic software. Components are already winning in online consumer services (that's where most of the Web 2.0 crowd is feeding today), and there are some important enterprise products. Over time I think the component products will eat their way up into enterprise and desktop productivity apps.
In this context, the fuss about software you can download on the fly, and support through advertising, is a sideshow. For many classes of apps it will be faster to use locally cached software for a long time to come, and I don't know if advertising in a productivity application will ever make much sense. But I'm certain that the change in development methodology will reshape the software industry. The real game to watch isn't ad-supported services vs. packaged software, it's atomized development vs. monolithic development.
What does it all mean?
I think this has several very important implications for the industry:
The big established software companies are at risk. The new development paradigm is a horrible challenge for them, because it requires a total change in the way they create and manage their products. Historically, most computing companies haven't survived a transition of this magnitude, and much of the software industry doesn't even seem to be fully aware of what's coming. For example, I recently saw a short note in a very prominent software newsletter, regarding Ruby on Rails (an open source web application framework, one of the darlings of the Web 2.0 crowd). "It's yet another Internet scripting language," the newsletter wrote. "We don't know if it's important. Here are some links so you can decide for yourself."
I guess I have to congratulate them for writing anything, but what they did was kind of like saying, "here's a derringer pistol, Mr. Lincoln. Don't know if it's important or not, but you might want to read about it."
Some software companies are trying to react. I believe the wrenching re-organization that Adobe's putting itself through right now is in part a reaction to this change in the market. The re-org hasn't gotten much coverage -- in part because Adobe hasn't released many details, and in part because the press is obsessed with Google vs. Microsoft. But Adobe has now put Macromedia general managers in charge of most of its business units, displacing a lot of long-time Adobe veterans who are very bitter about being ousted two weeks before Christmas, despite turning in good profits. I've been getting messages from friends at Adobe who have been laid off recently, and all of them say they were pushed aside for Macromedia employees. "It's a reverse acquisition," one friend told me.
I personally think what Adobe's doing is grafting Macromedia's Internet knowledge and reflexes into a company that has been very focused on its successful packaged software franchises. It's going to be a painful integration process, but the fact that Adobe's willing to put itself through this tells you how important the change is. Better to go through agonizing change now than to lose the whole company in five years.
What does Microsoft do? In the old days, Microsoft's own extreme profitability made it straightforward for the company to win a particular market. Microsoft could spend money to bleed the competitor (for example, giving away the browser against Netscape) while it worked behind the scenes to duplicate and then surpass the competitor's product. But the component software crowd doesn't want to take over Microsoft's revenue stream; it wants to destroy about 90% of it, and it can be very successful living off the remaining 10% or so. To co-opt their tactics, Microsoft would have to destroy most of its own revenue.
Here's a simplified example of what I mean: some of the component companies are developing competitors to the Office apps. A couple of examples are Writely and JotSpot's Tracker. Microsoft could fight them by trimming down Word and Excel into lightweight frameworks and inviting developers to extend them. The trouble is that you can't charge a traditional Word or Excel price for a basic framework; if you do, competing frameworks will beat you on price. And if you enable third parties to make the extensions, then they'll get any revenue that comes from extensions. I don't see how Microsoft could sell enough advertising on a Word service to make up the couple of hundred dollars in gross margin per user that it gets today from Office (and that it gets to collect again every time it upgrades Office).
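To make the framework idea concrete, here's a minimal sketch of what "a basic framework extended by third parties" looks like in code. It's a generic plugin pattern, not anything Microsoft has announced, and the hook and extension names are made up:

class DocumentFramework:
    # A stripped-down editor core that mostly just runs extensions.
    def __init__(self):
        self._extensions = []

    def register(self, extension):
        self._extensions.append(extension)

    def on_save(self, text):
        # Each third-party extension gets a crack at the document.
        for ext in self._extensions:
            text = ext.process(text)
        return text

class WordCountFooter:
    # A trivial stand-in for a third-party extension.
    def process(self, text):
        return text + "\n-- %d words" % len(text.split())

core = DocumentFramework()
core.register(WordCountFooter())
print(core.on_save("Atomized development in action"))

The sketch shows the economic problem as much as the technical one: the interesting features live in the extensions, so that's where any extension revenue goes too.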
The Ozzie memo seems to suggest that Microsoft will try to integrate across its products, to make them complete and interoperable in ways that will be very hard for the component crowd to copy. But that adds even more complexity to Microsoft's development process, which is already becoming famous for slowness. If you gather all the dinosaurs together into a herd, that doesn't stop the mice from eating their eggs.
I wonder if senior management at Microsoft sees the scenario this starkly. If so, a logical approach might be to make an all-out push to displace the Google search engine and take over all of Google's advertising business, to offset the coming loss of applications revenue. Will Microsoft suddenly offer to install free WiFi in every city in the world? Don't laugh; historically, when Microsoft felt truly threatened it was willing to take radical action. Years ago the standard assumption was that Gates and Ballmer utterly controlled Microsoft -- they held enough of the company's stock that they could ignore the other shareholders if they had to. I'm not sure if that's still true. Together the stock holdings of Gates and Ballmer have dropped to about 13% of the company. Microsoft execs hold another 14%, and Paul Allen has about 1%. Taken together, that's 28%. Is that enough to let company management make radical moves, even at the expense of short-term profits? I don't know. But I wouldn't bet against it.
The rebirth of IT? The other interesting potential impact was pointed out to me by a co-worker at Rubicon Consulting, Bruce La Fetra. Just as new software companies can become more efficient by working in the component world, companies can gain competitive advantage by aggressively using the new online services and open source components for their own in-house development. But doing so requires careful integration and support on the part of the IT staff. In other words, the atomization of software makes having a great IT department a competitive advantage.
How about that, maybe IT does matter after all.
Microsoft and the quest for the low-cost smartphone
Posted by
Andy
at
8:14 PM
The Register picked up an article from DigiTimes reporting that Microsoft's seeking bids to create a sub-$300 Windows Mobile smartphone.
At first the article made no sense to me because it's easy today to create a Windows Mobile or Palm Powered smartphone for less than $300. You use a chipset from TI, which combines the radio circuitry and processor in the same part. You can't doll up the device with a keyboard like the Treo, so you end up with a basic flip phone or candybar like the ones sold by the good people at GSPDA and Qool Labs.*
This works only with GSM phones (Cingular and T-Mobile in the US); if you want CDMA (Sprint or Verizon), your hardware costs more. And the costs for 3G phones are a lot higher.
I have a feeling it's lower-cost 3G smartphones that Microsoft is actually after. Most of the operators don't want to take on smartphones with anything less than 3G even today, and if Microsoft's looking ahead to future devices there's no reason to plan for anything other than 3G.
A 3G smartphone at $300 would sell for about $99-$150 after the operator subsidy, so Microsoft's trying to get its future smartphones down closer to mainstream phone price points.
What no one seems to be asking is why it's so important to do this. The assumption most people make is that by reaching "mainstream" price points you'll automatically get much higher sales. That's what the Register seems to believe:
"Symbian remains the world's most successful mobile operating system, almost entirely because Nokia has used it not only for high-end smart-phones but to drive mid-range feature-phones too."
Okay. But that happens only because Nokia's willing (at least for the moment) to eat several dollars per unit to put Symbian into phones bought by feature phone buyers: people who won't pay extra for an OS, and in fact don't even know Symbian is in their phones. Symbian is basically a big Nokia charity at the moment. Most other vendors are unwilling to subsidize Symbian like this, which is why Symbian sales outside Japan are almost totally dominated by Nokia.
Will Microsoft be able to get vendors to push its products the way Nokia does Symbian? Maybe with enough financial incentives. But I think the underlying problem is that the OS isn't adding enough value to drive large numbers of people to buy it, at any price. When we researched mobile device customers at PalmSource, we found lots of people who were willing to pay extra for devices that met their particular needs. Fix the product and the price problem will take care of itself.
Michael Gartenberg says he was very impressed by the preview he just got of the next version of Windows Mobile. I tend to trust Michael's judgment, so maybe there's hope.
__________
*If only some US operator had the wisdom to carry their products. Sigh.
Google to sell thin client computers?
Posted by
Andy
at
2:01 PM
There was an interesting little tidbit buried deep in a recent NY Times story on Microsoft and software as a service:
"For the last few months, Google has talked with Wyse Technology, a maker of so-called thin-client computers (without hard drives). The discussions are focused on a $200 Google-branded machine that would likely be marketed in cooperation with telecommunications companies in markets like China and India, where home PC's are less common, said John Kish, chief executive of Wyse."
Google sells servers, and there has been speculation about client hardware since at least early this year. But this is the first firm report I've seen.
I'm not a big fan of thin clients; local storage is very cheap, and much higher bandwidth than even the fastest network. But I presume the most significant thinness in a Google client would be the absence of a Microsoft operating system.
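To put rough numbers on that, here's a back-of-the-envelope comparison. The throughput figures are my assumptions, not measurements -- call it 50 MB per second for a current desktop hard disk and about 1 MB per second (8 megabits) for a fast broadband line:

FILE_MB = 500            # say, a modest local photo or mail store
DISK_MB_PER_S = 50.0     # assumed local disk throughput
NETWORK_MB_PER_S = 1.0   # assumed broadband throughput

print("From local disk:  %.0f seconds" % (FILE_MB / DISK_MB_PER_S))     # 10 seconds
print("Over the network: %.0f seconds" % (FILE_MB / NETWORK_MB_PER_S))  # 500 seconds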
Hey, Google, while you're at it you need to do a mobile client too.
"For the last few months, Google has talked with Wyse Technology, a maker of so-called thin-client computers (without hard drives). The discussions are focused on a $200 Google-branded machine that would likely be marketed in cooperation with telecommunications companies in markets like China and India, where home PC's are less common, said John Kish, chief executive of Wyse."
Google sells servers, and there has been speculation about client hardware since at least early this year. But this is the first firm report I've seen.
I'm not a big fan of thin clients; local storage is very cheap, and much higher bandwidth than even the fastest network. But I presume the most significant thinness in a Google client would be the absence of a Microsoft operating system.
Hey, Google, while you're at it you need to do a mobile client too.
Revisionist history
Posted by
Andy
at
12:13 AM
I'm working on a posting about software as a service. During my research, I reviewed Microsoft's recent executive memos on the subject. As always happens when I read Microsoft's stuff, I was struck by the loving craftsmanship that goes into those documents. Although these are supposedly private internal memos, I believe they're written with the expectation that they will leak. Microsoft slips little bits of revisionist history into the memos. Since the history notes are incidental to the main message of the memo, most people don't even think to question them. It's very effective PR. Here are two examples. Let's watch the message masters at work:
Ray Ozzie wrote: "In 1990, there was actually a question about whether the graphical user interface had merit. Apple amongst others valiantly tried to convince the market of the GUI's broad benefits, but the non-GUI Lotus 1-2-3 and WordPerfect had significant momentum. But Microsoft recognized the GUI's transformative potential, and committed the organization to pursuit of the dream – through investment in applications, platform and tools."
Reality: By 1990 everyone who understood computers, and I mean everyone, agreed that the graphical interface had merit. Everyone also agreed that Microsoft's implementation of it sucked. Meanwhile Microsoft had tried and failed to displace Lotus 1-2-3 and WordPerfect from leadership in the DOS world because they were established standards. One thing Microsoft recognized about the GUI was its transformative potential to break these software standards and replace them with its own Word and Excel.
And by the way, congratulations to Microsoft for figuring that out. If Lotus and WordPerfect had been more attentive to the shift to GUIs, I don't think Microsoft could have displaced them. That's an important lesson for today's software companies looking at the new development paradigm on the web.
Bill Gates wrote: "Microsoft has always had to anticipate changes in the software business and seize the opportunity to lead."
Wow. Exactly which changes did Microsoft anticipate and lead, as opposed to respond to and co-opt?
But seize is the right word. I like that one a lot.
Bring on the Singularity!
Posted by
Andy
at
6:13 PM
It's philosophy time. If you're looking for comments on the latest smartphone, you can safely skip this post.
One of the nice side effects of doing a job search in Silicon Valley is that you get to step back and take a broader view of the industry. A friend calls being laid off the "modern sabbatical," because this is the only opportunity most of us have for multi-month time off from work.
It's not really a sabbatical, of course. Unless you're supremely self-confident, it's a time of uncertainty. And instead of going on vacation you spend most of your days networking; having breakfast and lunch and coffee with all your old co-workers and other contacts. (Speaking of coffee shops, the drinking chocolate thing from Starbucks is obscenely expensive but tastes really good.)
In addition to the exotic drinks, the networking itself is very interesting because you get to hear what everyone else is working on. You get a glimpse of the big picture, and that's what I want to talk about tonight.
To work in Silicon Valley today is to be suspended between euphoria and despair.
The euphoria comes from all the cool opportunities that are unfolding around us. The despair comes from the fear that it's all going to dry up and blow away in the next ten years.
Let's talk about the euphoria first. The number of interesting, potentially useful business ideas floating around in the Valley right now is remarkable. As I went around brainstorming with friends, it seemed like every other person had a cool new idea or was talking with someone who had a promising project. Unlike the bubble years, a lot of these ideas seem (to me, at least) to be much more grounded in reality, and to have a better chance of making money.
The success of Google and Yahoo has put a vibrant economic engine at the center of Silicon Valley, and the competition between them and Microsoft is creating a strong market for new startups. In the late 1990s, the VCs loved funding networking startups because if they were any good, Cisco would buy them before they even went public. It feels like the same thing is happening today with Internet startups, except this time there are several companies competing to do the buying.
And then there's Apple, which is reaching a scale where it can make significant investments in…something. I don't know what. I personally think it's going to be the next great consumer electronics company, but we'll see. What I'm pretty sure of is that if you give Steve Jobs this much money and momentum, he's not going to sit back on his haunches and be satisfied.
The overall feeling you get is one of imminence, that great things are in the process of happening, even if you can't always see exactly what they are. The tech industry is so complex, and technologies are changing so quickly, that I don't think it's possible for any one person to understand where it's all going. But it feels like something big is just over the horizon, something that'll reset a lot of expectations and create a lot of new opportunities.
Perhaps we're all just breathing our own exhaust fumes, but the glimpses of that something that I got during my search seem a lot more genuine, a lot more practical, than they did during the bubble. It's an exciting time to be in Silicon Valley.
Then there's the despair. In two words, China and India.
I don't know anyone in Silicon Valley who dislikes Chinese or Indian people as a group (a lot of us are Chinese and/or Indian). But there's a widespread fear of the salary levels in China and India. I saw this first-hand at PalmSource, when we acquired a Chinese company that makes Linux software. Their engineers were…pretty darned good. Not as experienced on average as engineers in Silicon Valley, but bright and competent and very energetic. Nice people. And very happy to work for one tenth the price of an American engineer.
That's right, for the same price as a single engineer in the Bay Area, you can hire ten engineers in Nanjing, China.
Now, salary levels are rising in China, so the gap will narrow. I wouldn't be surprised if it has already changed to maybe eight to one. But even if it went to five to one, there's no way the average engineer in Silicon Valley can be five times as productive as the average engineer in China. No freaking way, folks. Not possible.
Today, there's still a skills and experience gap between the US and China in software. Most software managers over there don't know how to organize and manage major development projects, or how to do quality control; and their user interface work is pathetic. But that can all be learned. If you look out a decade or so, the trends are wonderful for people in China and India, and I'm honestly very happy for them. They deserve a chance to make better lives for themselves. But those same trends are very scary for Silicon Valley.
Embracing Asia
One way for today's tech companies to deal with this is to co-opt the emerging economies, to move work to the low-cost countries and get those cost advantages for themselves. There are some great examples of companies that have already done this. One is Logitech.
Did you say Logitech? They make…what, mice and keyboards? Most of Silicon Valley ignores them because their markets aren't sexy and they don't publicize themselves a lot. But when I look at Logitech, here's what I see: A company owned by the Swiss and headquartered in Silicon Valley (two of the highest-cost places on Earth) that makes $50 commodity hardware and sells it at a very nice profit.
Almost no one in Silicon Valley knows how to do that. In fact, the conventional wisdom is that it's impossible. And yet Logitech grows steadily, year after year.
One of their secrets is that they got into China long ago, years before it was fashionable. They don't outsource manufacturing to China, they have their own manufacturing located there. This lets Logitech go head to head with companies that want to fight them on price.
Another example is Nokia. The image of Nokia phones is that they're high-end and kind of ritzy, but when I was studying the mobile industry at Palm, I was surprised to see how much of Nokia's volume came at the low end of their product lines. Like Wal-Mart does in retailing, Nokia leverages its huge global manufacturing volumes to make phones cheaper than anyone else. It's the sales leader and also the low-cost producer, at least among the name brands.
So there's hope that Silicon Valley's tech companies, and their senior management, may survive by embracing the cost advantages of operating in China and India. But that's little comfort to engineers being told that their jobs are moving to Bangalore. And it doesn't help the small startups that don't have the scale to work across continents.
So here we remain, suspended between the euphoria of today and a deep-seated fear of the future.
What do we do about it?
My usual rule when facing any long-term challenge is that you need to change the rules. If a competitor's targeting the place where you're standing, move someplace else. If your economic model is becoming obsolete, find a new model. In the case of people living and working in Silicon Valley, I think the opportunity is to embrace and consciously accelerate the rate of change.
No tech company in the "developed" world, let alone Silicon Valley, is going to win an engineering cost battle. But if we can find ways to create new uses for technology, new businesses and new solutions, faster than anyone else, then I think Silicon Valley and places like it can survive and prosper. What sort of new businesses? I'll give you one example. The big Internet companies are in the process of trying to restructure the advertising and content distribution industries. Instead of slowing that process down and protecting the old established firms, the US and local governments ought to get out of the way and let Darwin operate. If the old established companies can't even compete with Google, what chance is there that they could compete with the low-cost tech empires being built in Asia? (By the way, that's one reason why I'm so pissed off at San Jose for not inviting Google to bid on installing WiFi throughout the city.)
I'm going to discuss some other business opportunities in future posts.
The overall theme I'm suggesting is to make change happen so quickly that cost isn't the basis for competitive advantage in technology. Instead, flexibility and adaptability is. The Valley can't be the cheapest, but perhaps the people in it can be the most nimble and creative. I think that's the best chance for survival.
There are problems with this vision. In a world that's going virtual, how does any geographic region create a specialization of any sort? I think a lot of the answer is culture. I've seen the reluctance of tech companies in many parts of the world to embrace radically new ideas until they're proven. To survive, Silicon Valley and the rest of the current tech industry needs to turn that on its head and consciously encourage radical experimentation. We're doing fairly well at that in software, but are miserably bad at it in hardware.
When Chris Dunphy and I were working together at Palm, we sometimes talked about the idea of the "Singularity." That's the point in the future at which technological change happens so fast that the world is altered in fundamental ways we can't anticipate today. I think the concept was best explained by Vernor Vinge, in an essay you can read here. Professor Vinge believes that if you extrapolate from today's trends, the Singularity (it's always capitalized) is a lot closer than you think.
The idea of the Singularity has been screwing up science fiction authors for about a decade now, because they have trouble extrapolating what the world will be like in 50 years, let alone 500. But in the rest of the world the idea doesn't have much traction. There's a lot of millennialist rhetoric associated with the idea that feels overblown (humans being transcended, etc.), and some very prominent people see the Singularity as a grave threat. Maybe the whole thing's just a bunch of nerdy intellectual wankery. There's a pretty good pro and con discussion in Wikipedia here (and I guarantee you won't find that entry in the Encyclopedia Britannica).
Chris and I used to joke to each other that the real mission of Palm was to make the Singularity happen sooner, by giving people as much computing power in their pockets as possible. We were wise enough not to tell anyone else, because we would have been viewed as kooks.
But maybe we had it wrong. I'm starting to believe that the right thing for Silicon Valley may be to consciously embrace the Singularity. Bring on the chaos, baby! The faster we can make the world change, the greater the chance we'll be able to pay our mortgages.
Look what's number one
Posted by
Andy
at
8:55 PM
The image above was sent to me today by a former PalmSource colleague. Yes, that's a list of Amazon's best-selling consumer electronics products.
And yes, that's the Fossil Palm OS watch at #1, outselling the iPod Nano.
The Fossil saga is one of the saddest stories in the licensing of Palm OS. Fossil had terrible manufacturing problems with the first generation product, and so the company became cautious about the market. I think the underwhelming performance of its Microsoft Spot watches didn't help either. When the second generation Palm OS product wasn't an immediate runaway success, Fossil backed away from the market entirely. They killed some achingly great products that were in development, stuff that I think would have gone over very well. And now here's that original Palm OS watch on top of the sales chart.
You don't want to read too much into this; the Fossil watch is on closeout, and ridiculously discounted. And being on the top of a sales list at one moment in time is not the same as being a perennial best-seller.
But still…this is the holiday buying season, and that's a pretty remarkable sales ranking. It makes me wonder, was the Fossil product a dumb idea? Or was it just the wrong price point, at the wrong time?
It's very fashionable these days to dismiss the mobile data market, or to think of it as just a phone phenomenon. But we're very, very, very early in the evolution of mobile data -- equivalent to where PCs were when VisiCalc came out. We haven't even come close to exploring the price points, form factors, and software capabilities that mobile devices will develop in the next 10 years. A lot of experimental products are going to fail, but some are going to be successes. Anyone who says the market is played out, or that a particular form factor will “never” succeed, just doesn't understand the big picture.
I don't know that an electronic calendar watch is ever going to be on everyone's wrist, but wait another five years and you'll be able to make a pretty powerful wristwatch-sized data device profitably for $50. And I'll bet you somebody's going to do something very cool with it.
(PS: Just so no one thinks I'm trying to pull a fast one, I should let you know that the Amazon list changes hourly, and the sales ranking of the Fossil watch seems to jump around a lot. But the fact that it's even in the top 25 impresses me.)
Be nice to the wiki
Posted by
Andy
at
5:51 PM
Is Wikipedia wonderful or awful? I’m going to argue that it’s mostly irrelevant. But first some background…
In the last month and a half there has been a kerfuffle between Tim O’Reilly and Nicholas Carr regarding Wikipedia. It started when O’Reilly posted a very interesting essay in which he laid out his definition for Web 2.0. It’s a long and pretty ambitious document with a lot of good ideas in it. If you’re interested in Web 2.0 it’s an important read.
It’s also kind of amusing because O’Reilly managed to give shout-outs to an incredible number of startups and bloggers. I haven’t seen that much name-dropping since Joan Rivers guest-hosted the Tonight Show.
Anyway, O’Reilly cited Wikipedia as an example of a great Web 2.0 venture. Then another very smart guy, Nicholas Carr, posted an essay called The Amorality of Web 2.0. Carr is positioning himself as kind of a gadfly of the tech industry, and in this essay he called out O’Reilly and a lot of other Web boosters, criticizing the Web 2.0 campaign in general, and specifically citing Wikipedia. He quoted some very bad Wikipedia articles and used them to argue that Web 2.0 enthusiasts are ideologues more than practical business people.
Like O’Reilly’s essay, Carr’s is a very good read, and you should check it out.
Since then I’ve seen both the O’Reilly and Carr essays quoted in a lot of places, usually to either praise or damn Wikipedia, and by extension all wikis. And I think that’s a shame.
To me, Wikipedia is a fun experiment but a fairly uninteresting use of wiki technology. The world already has a number of well-written encyclopedias, and we don’t need to reinvent that particular wheel. Where I think Wikipedia shines is in areas a traditional encyclopedia wouldn’t cover. For example, it’s a great place to find definitions of technical terms.
To me, that’s the central usefulness of a wiki -- it lets people with content expertise capture and share knowledge that hasn’t ever been collected before. By their nature, these wikis are interesting only to narrow groups of enthusiasts. But if you put together enough narrow interests, you’ll eventually have something for almost everyone.
Let me give you three examples:
First, the online documentation for WordPress. As part of my experiment in blogging, I’ve been playing with several different blog tools. I was very nervous about WordPress because it’s freeware, and as a newbie I was worried about accidentally doing something wrong. But when I installed it, and worked through the inevitable snags, I discovered that WordPress has some of the best online documentation I’ve seen. It’s a stunning contrast to for-pay products like Microsoft FrontPage, which unbelievably doesn’t even come with a manual.
Why is the WordPress documentation so thorough? Well, they started with a wiki, and gradually systematized it into an online suite of documents called the WordPress Codex. I found WordPress easier to install and work with than a lot of paid-for programs, and a major reason was the wiki-derived documentation.
Second example: the Pacific Bulb Society. Several years ago a nasty fight broke out on the e-mail discussion list of the International Bulb Society, a traditional old-style group of enthusiast gardeners. It was the sort of interpersonal nastiness that sometimes happens on mail lists. A group of people got so angry that they went off by themselves and founded the Pacific Bulb Society. And they set up a wiki.
In just a few years, the enthusiasts feeding that wiki have created what’s probably the most comprehensive online collection of photos and information about bulbs anywhere. In a lot of ways, it’s better than any reference book.
Third example: the Palm OS Expert Guides, a collection of 50 written guides to Palm OS software. I helped get the Expert Guides going, so I saw this one from the inside. PalmSource didn’t have the budget or vertical market expertise to document the software available for Palm OS, but a group of volunteers agreed to do it. The Expert Guides are not technically a wiki, but the spirit is the same.
Scratch around on the Web and you’ll find volunteer-collected information and databases on all sorts of obscure topics. To me, this is the real strength of the wiki process: enthusiasts collecting information that simply hadn’t been collected before, because it wasn’t economical to do so. As wiki software and other tools improve, this information gets more complete and more accessible all the time.
I think that’s pretty exciting.
In this context, the whole Wikipedia vs. Encyclopedia Britannica debate is kind of a sideshow. That’s not where the real action is.
Nicholas Carr does raise a legitimate concern that free content on the Web is going to put paid content providers out of business. But Wikipedia didn’t kill printed encyclopedias -- web search engines did that years ago. And I don’t think free content was the cause of death; the problem was that the encyclopedias weren’t very encyclopedic compared to the avalanche of information available on the Web. And the encyclopedia vendors acted more like carriers than creators. But that’s a subject for a different essay…
Motorola Rokr: Instant Failure
Posted by
Andy
at
11:36 PM
I did an online search today for the words “Rokr” and “failure” together in the same article. There were 49,700 hits.
I don’t want to pick on Motorola, but the speed at which its two-month-old product was labeled a failure is fascinating -- and a great object lesson for companies that want to play in the mobile space. Here are some thoughts.
First off, it’s hard to be certain that the Rokr actually is a failure, since there are no official industry stats on phone sales by model. But the circumstantial evidence is pretty damning. Most importantly, Cingular cut the phone’s price by $100 in early November. I can tell you from personal experience that no US hardware company ever introduces a device expecting to cut its price just a couple of months after launch. It causes too many logistical problems, and pisses off your early buyers.
Also, several reporters have noticed that Motorola and Apple both gave very telling comments about the product. Steve Jobs called it “a way to put our toe in the water,” which is about as tepid an endorsement as you can get. Ed Zander famously said “screw the Nano” about the product that upstaged the Rokr’s announcement (some people claim Zander was joking, but as one of my friends used to say, at a certain level there are no jokes).
Wired has even written a full postmortem report on the product.
If we accept that the Rokr is indeed a failure, then the next question to ask is why. There are a lot of theories (for example, Wired blames the controlling mentalities of the carriers and Apple itself). But my takeaway is more basic:
Convergence generally sucks.
People have been predicting converged everything for decades, but most products never actually converge. Remember converged TV and hi-fi systems? Of course you don’t; neither do I. But I’ve read about them.
And of course you have an all-in-one stereo system in your home, right? What’s that you say? You bought separate components? But the logic of convergence says you should have merged all of them long ago.
Remember converged PCs and printers? I actually do remember this one, products like the Canon Navi. It put a phone, printer, fax, and PC all together in the same case. After all, you use them all on the same desk, they take up a lot of space, so it makes a ton of sense to converge them all together. People use exactly the same logic today for why you should converge an MP3 player and a phone. And yet the Navi lasted on the market only a little longer than the Rokr is going to.
The sad reality is that converged products fail unless there is almost zero compromise involved in them. It's so predictable that you could call it the First Law of Convergence: If you have to compromise features or price, or if one part of the converged product is more likely to fail than the others (requiring you to throw out the whole box), forget about it. The only successful converged tech products I can think of today are scanner/fax/printers. They’re cheap, don’t force much of a feature compromise, and as far as I can tell they almost never fail. But they are the exception rather than the rule.
(By the way, I don’t count cameraphones as a successful converged product because they’re driving a new more casual form of photography rather than replacing traditional cameras.)
Looked at from this perspective, the Rokr was doomed because of its compromises. Too few songs, a UI that ran too slowly, a price that was too high. You won’t see a successful converged music phone unless and until it works just like an iPod and doesn’t carry a price premium.
The other lesson of the Rokr failure is that if you do a high-profile launch of a mediocre product, you’ll just accelerate the speed at which it tanks. If Motorola had done a low-key launch of the Rokr and had positioned it as an experiment, there might have been time to quietly tweak the pricing and figure out a better marketing pitch. But now that 49,700 websites have labeled the product a failure, rescuing it will be much, much harder.
Google offers WiFi to Mountain View
Posted by
Andy
at
1:08 PM
I wrote last month that I thought Google was likely to offer to install free WiFi in more Bay Area cities. Now the company has offered to do just that in Mountain View (a city north of San Jose and site of Google’s headquarters).
You can view the Mountain View city manager’s summary of the proposal, and a letter from Google, in a PDF file here. A couple of interesting tidbits:
The city manager writes: “Deployment in Mountain View is considered a test network for Google to learn…future possible deployment to other cities and in other countries.” (My emphasis.) I wonder where Mountain View got the idea that Google wants to deploy WiFi outside the Bay Area.
Google writes: “We believe that free (or very cheap) Internet access is a key to bridging the digital divide and providing access to underprivileged and less served communities.” Okay, I believe that too -- but if you know Mountain View you’ll know that the main digital divide there is between people who have DSL and people who have cable modems. If you want to bridge a real digital divide, offer WiFi for someplace like Oakland or East Palo Alto. But then you’d probably need to offer those people computers as well.
Google wrote: “In our self-interest, we believe that giving more people the ability to access the Internet will drive more traffic to Google and hence more revenue to Google and its partner websites.” The San Jose Mercury-News called that “unusual candor,” but I’d call it an understatement. If Google had said, “we’re planning to create a bunch of new for-fee services that we’ll promote like maniacs through the landing page,” that would have been unusual candor. As Mountain View’s summary states, “the agreement does allow Google in the future to charge a fee for enhanced services.”
I think it would be healthy for Google to come clean about this stuff. There’s nothing cities and users have to fear from Google’s new services, as long as Google doesn’t create a closed garden and promises to keep the network open. (Although you would need a Google ID to log in, and the service would take you first to a Google landing page.)
My questions: When will Google make that offer to the city where I live, San Jose? And why in the world is the San Jose City Council planning to spend $100,000 down and $60,000 a year of taxpayer money to build a WiFi network serving a small chunk of downtown when Google’s offering to serve whole cities for free?
Web 3.0
Posted by
Andy
at
12:06 PM
Or, why Web 2.0 doesn't cut it for mobile devices
One of the hottest conversations among the Silicon Valley insider crowd is Web 2.0. A number of big companies are pushing Web 2.0-related tools, and there’s a big crop of Web 2.0 startups. You can also find a lot of talk of “Bubble 2.0” among the more cautious observers.
It’s hard to get a clear definition of what Web 2.0 actually is. Much of the discussion has centered on the social aspirations of some of the people promoting it, a topic that I’ll come back to in a future post. But when you look at Web 2.0 architecturally, in terms of what’s different about the technology, a lot of it boils down to a simple idea: thicker clients.
A traditional web service is a very thin client -- the browser displays images relayed by the server, and every significant user action goes back to the server for processing. The result, even on a high-speed connection, is online applications that suck bigtime when you start to do any significant level of user interaction. Most of us have probably had the experience of using a Java-enabled website to do some content editing or other task. The experience often reminds me of using GEM in 1987, only GEM was a lot more responsive.
The experience isn’t just unpleasant -- it’s so bad that non-geeks are unlikely to tolerate it for long. It’s a big barrier to use of more sophisticated Web applications.
Enter Web 2.0, whose basic technical idea is to put a user interaction software layer on the client, so the user gets quick response to basic clicks and data entry. The storage and retrieval of data is conducted asynchronously in the background, so the user doesn’t have to wait for the network.
In other words, a thicker client. That makes sense to me -- for a PC.
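To make that concrete, here’s a minimal sketch of the thicker-client idea, written as browser-side TypeScript. Nothing in it is a real framework API; the Note type, the pending-save queue, and the /api/notes endpoint are hypothetical stand-ins for whatever a real Web 2.0 app would actually use.

```typescript
// Minimal sketch: update the UI immediately, push the save to the server in the background.

type Note = { id: string; text: string };

const pendingSaves: Note[] = [];

function editNote(note: Note, newText: string, render: (n: Note) => void) {
  note.text = newText;
  render(note);            // the user sees the change right away
  pendingSaves.push(note); // queue the change for the server
  void flushSaves();       // start a background sync; don't make the user wait on it
}

async function flushSaves(): Promise<void> {
  while (pendingSaves.length > 0) {
    const note = pendingSaves[0];
    try {
      // Hypothetical endpoint; substitute whatever your service actually exposes.
      await fetch(`/api/notes/${note.id}`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(note),
      });
      pendingSaves.shift(); // drop the item only after the server accepts it
    } catch {
      break;               // network hiccup: leave the queue intact and retry later
    }
  }
}
```

The point is simply that the interaction layer lives on the client and the network gets pushed into the background, so the user never waits on a round trip just to see their own edit.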
Where Web 2.0 doesn’t make sense is for mobile devices, because the network doesn’t work the same way. For a PC, connectivity is an assumed thing. It may be slow sometimes (which is why you need Web 2.0), but it’s always there.
Mobile devices can’t assume that a connection will always be available. People go in and out of coverage unpredictably, and the amount of bandwidth available can surge for a moment and then dry up (try using a public WiFi hotspot in San Francisco if you want to get a feel for that). The same sort of thing can happen on cellular networks (the data throughput quoted for 2.5G and 3G networks almost always depends on standing under a cell tower, and not having anyone else using data on that cell).
The more people start to depend on their web applications, the more unacceptable these outages will be. That’s why I think mobile web applications need a different architecture -- they need both a local client and a local cache of the client data, so the app can be fully functional even when the user is out of coverage. Call it Web 3.0.
That’s the way RIM works -- it keeps a local copy of your e-mail inbox, so you can work on it at any time. When you send a message, it looks to you as if you’ve sent it to the network, but actually it just goes to an internal cache in the device, where the message sits until a network connection is available. Same thing with incoming e-mail -- it sits in a cache on a server somewhere until your device is ready to receive.*
The system looks instantaneous to the user, but actually that’s just the local cache giving the illusion of always-on connectivity.
This is the way all mobile apps should work. For example, a mobile browser should keep a constant cache of all your favorite web pages (for starters, how about all the ones you’ve bookmarked?) so you can look at them anytime. We couldn’t have done this sort of trick on a mobile device five years ago, but with the advent of micro hard drives and higher-speed USB connectors, there’s no excuse for not doing it.
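Here’s a rough sketch of that store-and-forward pattern, again in TypeScript with made-up names: “sending” only writes to a local outbox, and a sync routine drains the outbox to a hypothetical /api/send endpoint whenever coverage returns. In a browser you could fake the device cache with localStorage, as this sketch does; a real mobile client would use whatever persistent storage the platform provides.

```typescript
// Store-and-forward sketch: "sending" only writes to a local outbox,
// which gets drained to the (hypothetical) /api/send endpoint when coverage returns.

type OutgoingMessage = { to: string; body: string };

const OUTBOX_KEY = "outbox";

function loadOutbox(): OutgoingMessage[] {
  return JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? "[]");
}

function saveOutbox(messages: OutgoingMessage[]): void {
  localStorage.setItem(OUTBOX_KEY, JSON.stringify(messages));
}

// To the user this looks instantaneous; the message is just parked in the local cache.
function sendMessage(msg: OutgoingMessage): void {
  saveOutbox([...loadOutbox(), msg]);
}

async function syncOutbox(): Promise<void> {
  if (!navigator.onLine) return;   // still out of coverage; try again later
  let outbox = loadOutbox();
  while (outbox.length > 0) {
    try {
      const res = await fetch("/api/send", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(outbox[0]),
      });
      if (!res.ok) break;          // server rejected it; keep it cached and retry later
    } catch {
      break;                       // connection dropped mid-sync; retry later
    }
    outbox = outbox.slice(1);
    saveOutbox(outbox);            // remove a message only after it's really delivered
  }
}

// Drain the outbox whenever the browser reports that connectivity is back.
window.addEventListener("online", () => void syncOutbox());
```

Reads work the same way in reverse: keep the last good copy of the data locally and refresh it opportunistically, so the app stays usable when the network isn’t there.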
Of course, once we’ve put the application logic on the device, and created a local cache of the data, what we’ve really done is create a completely new operating system for the device. That’s another subject I’ll come back to in a future post.
_______________
*This is an aside, but I tried to figure out one time exactly where an incoming message gets parked when it’s waiting to be delivered to your RIM device. Is it on a central RIM server in Canada somewhere, or does it get passed to a carrier server where it waits for delivery? I never was able to figure it out; please post a reply if you have the answer. The reason I wondered was because I wanted to compare the RIM architecture to what Microsoft’s doing with mobile Exchange. In Microsoft’s case, the message sits on your company’s Exchange server. If the server knows your device is online and knows the address for it, it forwards the message right away. Otherwise, it waits for the device to check in and announce where it is. So Microsoft’s system is a mix of push and pull. I don’t know if that’s a significant competitive difference from the way RIM works.
How not to market a smartphone
Posted by
Andy
at
10:09 PM
The November 7 issue of BusinessWeek features this full-page ad for the LG VX9800 smart phone, which is currently available through Verizon.
The screen shows what looks like a video feed of a football game, and the “remote not included” headline implies that it’s a video product. But the presence of a keyboard implies e-mail, and look at the background of the photograph -- the phone is sitting on what looks like a polished granite table, and out the window you can see tall buildings, viewed from a height. It looks like we’re up in a corporate executive’s office.
So who’s this phone really for?
The text of the ad doesn’t help: “Now you can watch, listen, and enjoy all your favorite multimedia contents and exchange e-mails without missing a single call. With its sleek design, clarity of a mega-pixel camera, sounds of an audio player, easy-to-use QWERTY keypad, it’s the new mobile phone from LG!”
It’s just a feature list -- and, by the way, a feature list that reads like it was badly translated from Korean.
There’s no sense of who the product’s for, or what problems it’s supposed to solve in that customer’s life. Basically, this is a phone for geeks like me who enjoy playing with technology. And we all know what a big market that is -- we’re the people who made the Sony Clie a raging commercial success.
An added difficulty is that the features LG talks about don’t necessarily work the way you’d expect. The video service that comes with the phone shows only a small number of short clips, to get Outlook e-mail you have to run a redirector on your desktop computer, and the phone doesn’t even have a web browser. You can learn more in this PC Magazine review.
But I’m not all that concerned about customers being disappointed, because with this sort of feature-centric advertising, very few people are going to buy the phone anyway.
The ironic thing is that the people at LG are smarter than this. I’ve met with their smartphone folks. They’re bright, they learn fast, and LG definitely knows how to make cool hardware. But somewhere along the way they’re just not connecting with actual user needs.
Just like most of the other companies making smartphones today.
Epilog: Tonight (11/3) I saw that LG has created a television ad for the phone. We've now confirmed that the target user is a young guy with an untucked shirt and cool-looking girlfriend. Sounds like the geek aspirational market to me. Oh, and most of the commercial is...a feature list.
Google Base: Is eBay really the target?
Posted by
Andy
at
10:59 PM
There has already been a ton of commentary on the “leak” of information about Google Base, a Google service in development that would let people post their content on a Google database. Most of the speculation I've seen has positioned Google Base as a classified ad service that would let people sell things online, competing with eBay.
But that’s not what I read into the service. What I think I see is a freeform database, a free-of-charge data publishing service. Right now if you want to post information to the Internet, you need to create a website or post it on someone else’s site. I think this Google service would make it a lot easier for people to share content if they want to. That in itself will drive the placement of more information into the online world, which is a basic Google value.
To me, the most interesting commercial opportunity around Google Base would be if Google tied it to a payment system, so people could charge for the content they post. That would turn Google Base into a self-publishing platform for software developers, artists, authors, and other content creators.
If I’m right, the real target isn’t eBay or the classified ad services, but the commerce engines that content creators have to license separately today if they want to sell their stuff online. I think Google could do a lot to streamline that process, and the result would be a lot more sales of specialized content online. Speaking as someone who has a side business of creating content (consulting reports and a book on the way), I think a Google service like that would be hot.
I have no idea if Google’s planning to create this sort of commerce engine, but it’s what I’d do if I were in their shoes.
Helio talks the right story
Posted by
Andy
at
10:01 PM
Helio is the new name of the MVNO being created by SK Telecom and Earthlink. The name itself isn’t worth a posting (although I’m always happy when a company successfully gets a new brand, considering how hard it is to get legal clearance). But I like the story Helio’s telling about its target market.
Helio says it’s going to target young people with phones enhanced for music, games, video, and other entertainment. I like that they claim they’re working on both the hardware and the software together, because that’s the right way to create a successful mobile device. And I know from the research I’ve been involved in that there’s a substantial market of young people who are willing to pay extra for phones that also keep them entertained. But there are two catches.
The first (and it’s an important one) is that most of these young people, because they’re young, don’t have a huge amount of spare cash. It’ll be interesting to see how Helio balances the budgets of its target customers against the high revenue it says it’ll generate.
The second catch is that, although many young people do want phones that entertain them, most of them are not at all willing to pay extra for technology for its own sake. The products have to be iTunes/iPod-quality, or Helio won’t live up to expectations. This is sometimes a hard task for Asian companies, whose domestic markets are a little more willing to buy gadgets just because they’re cool.
Still, I hope Helio can pull it off.
A modern marriage proposal
Posted by
Andy
at
9:14 PM
This has almost nothing to do with mobile computing, but I think it's cool and wanted to share it. A longtime friend and co-worker of mine proposed to his girlfriend this week. Like any good technologist, he found a Web-assisted way to do it. Check out his marriage proposal website.
By the way, she said yes.
What does Google want?
Posted by
Andy
at
4:15 PM
I’ve been doing a lot of networking in the last couple of months, meeting new people and getting in touch with old friends and co-workers. It’s fun to have the time to share ideas again, after being heads-down with Palm for six years.
Most of the conversations eventually come around to the question, “What does Google want?” It’s a great topic because Google has enough money, and is ambitious enough, that it might be planning to do almost anything. Google is also deliberately coy about its intentions, creating a sort of giant corporate ink blot test. Our theories about it may say more about our own desires than they do about Google itself.
Here are the three leading theories I’m hearing:
Theory 1. Google wants to control the ultimate OS. In this perspective, Google views Microsoft as its most important competitor/target, and is carefully executing a long-term plan to make the Windows/Office monopoly irrelevant. By creating more and more programming interfaces to its services, Google is causing applications development to gradually shift away from the PC’s APIs to those embodied in servers on the network. Windows doesn’t necessarily disappear, but it stops being the control point for computing innovation.
It’s kind of like Sun’s old slogan, “The Network is the Computer,” except in this case someone’s actually making it happen.
Some people view Google’s recent semi-endorsement of Sun OpenOffice as proof that Google’s trying to undercut Microsoft Office. You can get a fairly enthusiastic account from InformationWeek.
Theory 2. Google wants to destroy all the carriers. In this view, Google’s main priority is to take over the transport of content and information, rather than just organizing it. Every company that distributes content and information -- phone companies, television networks, cable companies, and so on -- is a target as Google seeks to deliver everything through the Internet.
The prime exhibit in this theory is Google’s purchase of dark fiber resources. Combine that with Google’s experiments around video downloading and Google Talk, and you can draw a scenario in which Google uses the Internet to take down all of the middlemen who carry all of our entertainment, information, and communication.
I think this scenario is especially appealing to many people in Silicon Valley because of the visceral dislike that so many of us in the tech industry have toward carriers of any sort. I should probably do a whole article on why this is, but the short summary is that carriers get in the way of things that many tech companies would like to do. A lot of people in Silicon Valley would be very happy -- like, fall of the Berlin Wall happy -- if the carriers just disappeared some day.
Their hope is that Google’s going to make it happen.
Theory 3. Google is making it up as it goes along. Those of us who have worked in large, visible companies know how good people on the Internet are at making up conspiracy theories. At Apple and Palm I used to shake my head in amazement when some commentator came up with an amazing master plot that linked several completely independent things our company was doing, and presented them as a single conspiracy.
We were never clever or organized enough to pull off most of the plots that were attributed to us. Sometimes big companies do things because there’s a plan, but just as often they do them because of internal politics or random unconnected ideas. In my experience, the broader the supposed conspiracy and the more groups and business units it links, the less likely it is that there’s an actual plan.
Amid all the rumors about Google, one thing I know to be a fact is that it deliberately hires the best computing graduates in order to keep them off the street. Google’s founders came out of academia to steal a march on the search leaders, and they’re very worried that someone might do the same to them. The easiest way to prevent that is to hire all the brightest computing grads and put them to work. It doesn’t really matter what they do, as long as they do it for Google rather than someone else.
In this view, much of Google is more like a huge research lab rather than a traditional company. Sure it’s experimenting with VOIP and video downloading -- it experiments with everything. But that doesn’t mean all the experiments are controlled by a single master plan.
This theory isn’t nearly as popular online as the others, but I heard it from a very experienced consultant and technologist in the valley (who I won’t name because it might screw up his ability to do business with Google). It’s also the theory I believed -- until recently.
My opinion: It’s theory number 2.
What turned me around was Google’s recent proposal to blanket San Francisco with WiFi. Although the Google proposal is far short of a formal bid, the fact that they made it at all says a lot to me. They’re willing to put their brand and reputation on the line for a huge fixed infrastructure of wireless base stations, and all the customer support headaches that would go along with them. That’s a much different business model than Google has used in the past, and it’s not something a company would ever propose lightly. There’s no way this is a random experiment.
Now for my speculation. I don’t think Google would do this only for the Bay Area, and I don’t think local advertising would produce enough of a return on investment to justify the cost and risk associated with creating one of these networks. I think it’s a trade-up play -- you give away the 300 kbps service and then charge for a series of add-on services on top of it. Voice telephony (replace your landline), and video download (replace your cable TV company) are two obvious ones because there’s an established market for both of them that Google can cannibalize quickly. Plus of course they’ll trash the DSL business.
If I’m right, we should expect to see Google start dabbling in the creation of other services that it could drive over this network. They should also offer to un-wire other cities in the Bay Area.
Unfortunately for those who want the cellphone carriers to go away, I think the current technical limitations of WiFi phones -- standby battery life about a tenth that of a cellphone -- will make the wireless voice carriers the last domino to fall. But I believe Google is truly gunning for everyone else.
The myth of the smartphone market
Posted by
Andy
at
3:50 PM
Who will buy smartphones? And what are the “killer” features?
One of the most common themes among people watching the mobile market is the quest for the ultimate device. “Which is the one everyone will buy in the future?” reporters ask me. Discussion boards have endless debates over the relative merits of the Treo, Blackberry, Microsoft Smartphone, and so on. The underlying assumption is that at some point we’ll see the emergence of one converged killer device that gets universally adopted.
I’m not sure why we’re all looking for one ultimate winner. Maybe it’s a hangover from the PC market, where one basic design did dominate the market. (Please, no angry messages from Mac or Linux users -- I was at Apple for ten years and I’m not ever going to write off the Mac. But you gotta admit there was a winner, folks.)
Anyway, I think the PC market is not a good analog for the mobile world. We need to cast aside our PC assumptions, including the assumption that there’s going to be a single unified mobile market.
At PalmSource we did a lot of research on mobile customers and what they want. The basic outlines of what we found were released to the public, so it’s okay to talk about them. Here goes:
About 60% of mobile phone users in the US and the major European countries are unwilling to pay extra for anything other than basic voice and SMS. In the US they typically take a cheap phone with a low-cost service plan, while in Europe they tend to be on pay-as-you-go plans that let them limit their billing very carefully. They’ll even turn off their phones sometimes to limit the number of calls they take.
If you give them a phone with free features, they’ll accept it, of course. But what makes them distinct is that they won’t pay extra to use those features.
For an example of this effect, look at the high sales of subsidized cameraphones, and compare that to the low number of people who pay to send lots of MMS messages containing those photos.
Maybe someday it’ll be possible to coax these people into doing more, but they’ll be the last adopters, so you can forget about selling them anything advanced right now.
Three value-added segments
The good news is that about 35%-40% of mobile phone users are willing to pay extra for additional features beyond voice and SMS. With worldwide mobile phone sales running at well over 600 million units a year, that means you could sell more than 200 million advanced phones a year. Not a bad market, and far beyond today’s sales of smartphones, which are running at somewhere between 6 million and 25 million units a year, depending on how generously you define smartphone.
Unfortunately, these people don’t all want the same advanced features in their phones. They split into three different market segments, each about 12% of the population, with very different needs and demographics. I think there are probably also a lot of sub-segments within each of the major segments.
The first segment is a group of people I like to call communication enthusiasts. These are extroverts who live to communicate with other human beings, and they’re often in people-facing jobs like sales and business development. To picture this user, think of the best sales representative you’ve ever met, warm and enthusiastic and always ready to chat.
These people are willing to pay for any advanced phone feature that’ll help them communicate better. E-mail, short messaging, IM, video calls, whatever. I think they’re the main people buying Blackberries and Treos.
The second segment is information enthusiasts. These people are a little more introverted, and tend to be in information-heavy jobs like medicine, law, and research. They need a tool that helps them manage all that information. Think of a doctor, trying to keep track of patient records and reference information on thousands of drugs.
The information enthusiasts will pay extra for features that extend their memory and help them work with information. Databases, larger screens, reading PC documents, and running lots of third party apps. Right now I don’t think anyone’s designing an ideal mobile phone for them (the Motorola A780 is the right hardware, but disastrously wrong software). Today a lot of these people buy handhelds instead.
The third segment is entertainment enthusiasts. These users are younger people (late teens and twenties) who want to keep their fun lifestyles even as they enter the workforce. They’ll pay extra for enhanced entertainment features -- music, games, video, fun messaging. They don’t have as much money as the information and communication users, so they can’t pay as much for their phones. There’s some evidence that this market sub-segments into game enthusiasts, music lovers, and so on.
The Danger Hiptop is aimed squarely at this demographic (check out the Snoop commercials), as is the Motorola Rokr (although at a price of $250 to hold only 100 songs, I am deeply skeptical of how well the Rokr will sell).
The misguided drive for convergence
In reaction to these different needs, a lot of people in the industry are trying to create an ultimate converged device that has features appealing to all three groups. So you get smartphones dolled up with e-mail clients, MP3 players, and loads of information management applications.
These typically don’t sell well, for two reasons. First, unlike a PC, when you add features to a mobile device you pay a heavy price. If a PC gets a little heavier, or uses a little more power, no one will even notice. But do that to a mobile device, and it may suddenly become too heavy for most people to carry, or its battery life may become too short. Tiny differences in specs can create surprisingly huge changes in sales.
The second reason why “Swiss Army Knife” products don’t sell well is that most mobile customers are intensely practical. They buy mobile products like appliances, to do a specific job. All of the most successful mobile products are associated with a particular task that they do well. They may be capable of doing more, but there’s always a lead feature they excel at. The iPod is fantastic at music acquisition and playback. The Blackberry is great at Exchange e-mail (and stinky at almost everything else). And the original Palm Pilot excelled at calendar and address book.
As far as I can tell, the only place where Swiss Army Knife mobile products are popular is in the online discussion forums that we all read. We technophiles, we few proud pioneers, are utterly out of touch with the needs and desires of normal mobile customers.
There is no smartphone market
What all this means is that there’s no unified smartphone market. Instead, there are a series of markets for phones that are smart at particular tasks. The way to win is not to create one ultimate device; it’s to create a series of products that are great solutions for certain customer groups. The market’s a series of rifle shots, not a shotgun blast.
So the best analogy for the mobile device market isn’t PCs, it’s cars. There is no car market, there’s a market for sports cars, a market for SUVs, a market for sedans, and so on. If we think of the mobile market the same way, we’ll all have happier customers and we’ll sell a lot more products.
__________
Additional reading: Here are a couple of third party reports that explore elements of the smartphone myth. Unfortunately, you have to pay to read these reports, but if you work in a company that can afford it, the investment is worthwhile.
Jupiter Research: How to Succeed in Wireless Without Really Converging
A nice overview that, regrettably, doesn't dig into details on the value-added segments.
Forrester Research: Segmenting Europe's Mobile Consumers
This report is from 2002, but the findings are still valid. It's the best overview you can get, and at $700 it's a deal compared to a lot of other studies.