
Category Archive for 'Business Strategy'

Ordinarily I try to make my posts relevant to some recent development in the media landscape, usually focused on technology or digital media. Today, I’m going to change tack a bit because something has been bothering me. A lot!

Overstock.com rebranded themselves O.co

What were they thinking?! The rebranding effort launched in Q1 of this year and continues today. They’re still running commercials to tell people about their new name/brand. According to AdAge, there are two reasons for the rebranding: (1) the word “overstock” is hard to translate for a global market, and (2) they’re not just dealing in “overstock” items any more.

I can’t comment on the translatability of the word “overstock,” but their second reason is ludicrous. I, personally, stopped thinking of Overstock.com as a re-seller of excess merchandise a long time ago. They’re an e-retailer, and Overstock is their brand; it’s who they are, not what they do. Brands don’t have to be literal, for crying out loud.

You know, when I first saw the commercials for O.co, I couldn’t help but wonder how much they paid their branding consultants for the advice. You just know there are some consultants out there, laughing all the way to the bank. Instead of coming up with a branding campaign that would highlight some unique value proposition, or teach the consumer what Overstock is all about, they recommended an exceedingly costly rebranding effort. They, I’m sure, charged an exorbitant amount to come up with the new brand, and a half-baked implementation plan. I bet they sold Overstock management on the change by talking about how–in the post-Twitter world–shortlinks were gold! This is the kind of bull that gives consulting a bad name.

So now, Overstock.com have spent millions upon millions advertising their rebranding, but as far as I can tell haven’t committed to it. First, their commercials don’t seem to tell the consumer what O.co is about. Brands are essentially promises to the consumer; when you think of a brand, you should have mental associations with it, and the brand should deliver on those associations. That’s how you build brand equity. O.co doesn’t seem to be promising anything except a quicker (but less memorable) link. If Overstock really want to make O.co work, they need to start building brand associations with the new name. It won’t be easy, given how generic the name sounds, but if they don’t, O.co will have no relevancy. Without relevancy, Overstock will lose any opportunity for advantage from brand equity.

Second, if you type o.co into your browser, it redirects you to overstock.com. Overstock launched their rebranding campaign back in February, but they still haven’t gotten around to moving the site to their new domain?! That sends a mixed message, and branding is all about consistency. If they really want this rebranding to work, they need to commit.

Finally, not all their marketing uses O.co consistently. For instance, if you use the tweet widget on Overstock.com, the generic tweet will use the full Overstock.com link rather than an O.co link. Again, Overstock.com don’t seem to be truly committed to the rebranding. Consistency is essential to building brand equity.

It seems to me that Overstock.com rushed into this rebranding based on bad advice and without a good plan to pull it off. Their money would have been better spent dealing with fundamental issues in their website: long page load times, website design that’s cluttered and unappealing, etc. Hell. They probably would have been better off giving their money to charity. At least that would have built up some good will (if properly publicized).

Am I seriously the only person bothered by this?! I guess you could say that dimwitted marketing schemes are a pet peeve of mine. This O.co rebranding gets under my skin because it feels wasteful. Yes, it’s way easier to change your name and hope for the best than to deal with fundamental issues of business strategy and operations, but it’s a much riskier investment. I’m not saying marketing isn’t important. Marketing is essential, but poorly conceived and expensive marketing campaigns are no replacement for genuine strategic advantages, or for branding based on real value propositions.

There are currently a lot of companies duking it out for control over the future of digital gaming. In particular, I’ve been keeping an eye on OnLive since last Christmas, when they ran a couple of high-profile ads to hype the launch of the OnLive game system for the TV, a “microconsole” which allows you to play streaming games purchased in the OnLive environment directly on your TV. For those of you who aren’t familiar with OnLive, check out their own explanation of the service here. OnLive have been in the news a few times in the past week, and I want to take this opportunity to examine OnLive, their competitors, and what the latest news says about the future of gaming.

Personally, I’m an active Steam user. Steam–a service from Valve, the company behind games like the Orange Box–is a cloud-esque game distribution platform which allows you to buy the digital rights to a game and then download it to any local computer where you are signed into your Steam account. In contrast, with OnLive, you download an app to the local hardware and then stream the game; it’s much closer to true cloud gaming. The obvious advantage of the OnLive approach is that you don’t have to worry about meeting each individual game’s system requirements. In other words, you could have older hardware with out-of-date graphics drivers, but the game should still work. For publishers, there is another advantage: if the game is never downloaded locally, it is much harder to pirate. The drawback with OnLive is one of bandwidth. In order to play games through OnLive you need enough bandwidth to stream them without delays and buffering issues, which requires a high-speed internet connection of at least 25 Mbps. Such high-speed internet connections are far from what I would call “affordable.” And even at these higher internet speeds, streamed games usually have worse graphics than their downloaded or hard-copy versions; by lowering the resolution of the game, OnLive et al. can reduce the bit rate for delivery and ensure a more seamless stream.
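To make the resolution-versus-bandwidth trade-off concrete, here’s a back-of-the-envelope bit-rate estimate. This is a rough sketch with illustrative numbers; the compression factor and resolutions are my assumptions, not OnLive’s actual encoding parameters:

```python
# Back-of-the-envelope estimate of a streamed game's video bit rate.
# All parameters are illustrative assumptions, not OnLive's real settings.

def stream_mbps(width, height, fps, bits_per_pixel=0.1):
    """Estimate a compressed video bit rate in Mbps.

    bits_per_pixel is an assumed average AFTER compression; real codecs
    vary widely with content, motion, and encoder settings.
    """
    return width * height * fps * bits_per_pixel / 1_000_000

# A 720p/60fps feed vs. a reduced 480p/60fps feed
hd = stream_mbps(1280, 720, 60)
sd = stream_mbps(854, 480, 60)
print(f"720p60: ~{hd:.1f} Mbps, 480p60: ~{sd:.1f} Mbps")
```

Dropping from 720p to 480p cuts the estimated bit rate by more than half, which is exactly why a lower-resolution stream survives a congested connection better.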

That being said, I genuinely believe the OnLive approach is the direction this industry is heading. And I believe the world of gaming will be the earliest media industry to make a true transition to the cloud. Both the early start (i.e., Steam has been up and running since 2007) and the speed of this transition are due to the nature of gaming as fundamentally technology-focused. Gamers, especially PC gamers, tend to be early adopters of the latest and greatest technology. Additionally, digital distribution (including cloud-based gaming) arguably offers higher value for the consumer than traditional retail sales of hard copies. Thus, gamers are transitioning to more and more digital distribution options, where available.

OnLive is well positioned to take advantage of this transition. They’ve made a lot of smart decisions: they’re playing with multiple pricing models and delivery methods and they’re building smart partnerships. I’m impressed. Don’t get me wrong, they’ve still got a lot more to do. They need to keep building their library of titles and they should be partnering even more. These are essential to building the consumer value proposition and driving customer acquisition. OnLive must realize they are dealing with a product with major network effects. That means each additional consumer using the “network” increases the value of the network itself. In this case, each new user increases the value, for publishers, of selling their games through the network; as more publishers sell their games through the network, it becomes more valuable to consumers to use that network. With this in mind, OnLive needs to really ramp up customer acquisition. A large, loyal customer base will be their greatest defense against the competition, especially gaming retail giant GameStop. GameStop has been buying companies in this area hoping to cobble together a strong streaming offering to launch in mid-2012.
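The network-effects logic can be sketched numerically. Under a Metcalfe-style assumption that a network’s value scales with the number of possible connections between participants, each new user adds more value than the one before. The quadratic-versus-linear contrast is the point here; the figures are purely illustrative:

```python
# Metcalfe-style illustration: if a network's value scales with the number
# of distinct participant pairs, n * (n - 1) / 2, then each additional user
# adds more value than the previous one.

def network_value(n):
    return n * (n - 1) // 2  # number of distinct pairs among n users

for users in (10, 100, 1000):
    print(users, network_value(users))
# Users grow 100x (10 -> 1000), but pairwise connections grow ~10,000x.
```

That asymmetry is why an early land grab for users matters so much: the leader’s network compounds in value faster than a follower’s can.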

With all this in mind, I now want to examine the latest news stories revolving around OnLive in the past week. First, OnLive was in the press last week over a quarrel with GameStop, in which the retailer pulled OnLive coupons out of physical copies of the new release Deus Ex. Then, yesterday, OnLive and GameSpot (the gaming site) announced a partnership to offer free OnLive game demos within GameSpot reviews. Let’s look at each of these in turn.

First, Deus Ex: Human Revolution was released last week on Tuesday, Aug. 23. OnLive had brokered a deal with Square Enix, the publisher, to bundle a coupon for a free digital copy of the game from OnLive within physical copies of the PC game. Since the game is being sold via OnLive for $49.99, the coupon is not an inconsequential value to the consumer. OnLive presumably hoped to use the promotion to increase consumer awareness and drive customer acquisition. For Square Enix, the deal increases the value of their product without any incremental cost. In fact, because digital copies of games are generally sold for as much or nearly as much as hard copies, game publishers are incentivized to help build up digital sales in a way that book or music publishers are not. Thus, from a publisher’s perspective, they should theoretically prefer digital sales because revenues are just as high and margins are even higher than for physical sales. GameStop, however, has a vested interest in both protecting their physical sales and preventing OnLive from gaining too much of a foothold in the streaming market. For this reason, they instructed their staff to open the Deus Ex PC games and remove the coupons–a move that has enraged gamers and raised questions about the firm’s ethics.

This week, OnLive began partnering with GameSpot, a popular gaming website. GameSpot will now embed links to free demos from OnLive within their game reviews. So, if you’re on GameSpot and reading a review for a new game called… Awesome Sauce, let’s say, you might see a link that says: “Awesome Sauce Instant Demo, Play Now!” That will lead you to a page instructing you to install the OnLive app. From there, you’ll need to create an OnLive account. And, hey presto, OnLive has a new customer. If you like the demo and it plays well, you might even buy the game from OnLive, on the spot. The benefits for OnLive are obvious, but the partnership is good for GameSpot too, because it adds a complementary product to their own reviews. The deal also requires GameSpot to provide reviews and other editorial content to OnLive for their own site. Beyond these basic details, I’m curious about the terms of the deal. I’d like to know whether GameSpot gets some percentage of revenues from games bought by people who linked from the GameSpot demos. I’d also like to know whether the deal guarantees GameSpot exclusivity for OnLive demos. Anyone out there got some insight?

Ultimately, both of these stories suggest to me that OnLive recognizes the network effects that will be essential to success in streaming games. OnLive has a head start right now, and they’re doing some good things to take advantage. Nevertheless, they still face a lot of complaints about streaming speed issues, poorer graphics and a small library of games. They need to step it up, or when GameStop finally arrives on the scene, they could get blown away. GameStop surely recognizes how network effects will play a part in the future of streaming games, thus their typical dicky behavior about the Deus Ex coupons. GameStop plays hardball, treats their consumers like crud and makes a lot of money doing it. I’d rather they didn’t win the future of digital gaming. So… Let’s go, OnLive. I’m rooting for you.

The big news of the week is the acquisition of Motorola by Google. It’s a rich topic to delve into because of the implications not just for smartphone manufacturing and operating systems, but for the TV industry and digital advertising as well. There are dozens of angles from which to view the planned acquisition and even more questions and uncertainties to ponder. Will Google keep the manufacturing branch or are they only interested in the IP? Will Google leverage Motorola’s position as a prime manufacturer of cable set-top boxes to re-invigorate their Google TV business? I particularly recommend Ben Parr’s analysis of some of the key issues facing the merger, though I don’t agree with all his assertions. Nevertheless, what I’m left wondering about, and I’m surprised people don’t seem to be talking about, is whether this acquisition represents a fundamental shift in Google’s strategy in the mobile marketplace.

To date, Google’s only true revenue success has been selling paid search and digital advertising. This focus on ad revenues remained central to Google’s strategy when they looked toward the mobile industry. The Android ecosystem was designed to be completely open and accessible, with the code itself released as open source. The purpose of an open system was to capture as much of the mobile OS market share as possible in a very short amount of time. An open ecosystem encourages not just manufacturers but also app developers to work within the ecosystem. And the more people develop within the ecosystem, the more valuable it is to the consumer. Similarly, the more consumers in the ecosystem, the more valuable it is to developers. It’s a classic example of network effects. This is why Google offers use of the Android OS free of charge to OEMs (original equipment manufacturers) like Samsung and HTC. Basically, Google’s strategy involved making a land grab for market share in the mobile OS space and then using this share to capture the mobile search market. If Google controls the mobile search market, which is growing at an astronomical rate, they will own a lot of the revenue potential from paid search on mobile devices.

Now, however, Google risks alienating their OEM partners. The purchase of Motorola (who was one of Google’s most important OEM partners, with their enviable Droid phone) represents a move to forward integrate into the manufacturing space, and thus directly compete with their other OEMs. These other OEMs have offered lukewarm support in favor of Google’s move, hoping the additional IP from Motorola will give Google leverage in the ongoing IP wars with Apple. In particular, HTC has been entrenched in vicious IP suits with Apple, who name Google as a co-defendant, over functionality in HTC’s Android smartphones. Such IP disputes are dragging down the profitability of many competitors in the smartphone market, and this merger may give Google some leverage to help broker a peace treaty.

But don’t assume that Samsung and HTC aren’t worried about Google’s move into their market space. This move into manufacturing–assuming Google doesn’t spin off the manufacturing branch–could represent a very real shift in Google’s strategy for the mobile marketplace. For years, Google have cited their dependence on paid search for their revenue health as one of their biggest risks. [Never mind that investors tend to prefer companies that aren’t diversified.] The management has long believed they needed to build other reliable revenue streams beyond advertising. To this end, perhaps Google believes Motorola is the answer to their prayers. Perhaps they will use this acquisition as a turning point in their mobile strategy, opting to eventually own the Android ecosystem as a more closed environment, similar to Apple. They already started making Android more closed last year by putting more restrictive rules on how OEMs could use and alter Android’s open-source code. What if Google will now seek to control the full value chain for their smartphones?

Well, who knows, but you can bet Android OEMs are afraid of that very thing. I almost guarantee that Samsung et al. will be working on new defensive strategies and contingency plans. In fact, this merger may be just the opportunity Microsoft needs to lure away Android OEMs. I know if I were at Microsoft I’d be dancing a jig right about now. And, given the market’s lukewarm response to Google’s purchase of Motorola, I’d forget all about acquiring Nokia and focus on partnering with all Google’s scared Android OEMs.

As for Google, if this acquisition does represent a fundamental alteration in their mobile strategy, from focusing on winning market share of paid search, to owning their full value chain in an attempt to out-Apple Apple… well best of luck! They’re really going to need it.

Big news today from Amazon, who have quietly released their Kindle Cloud Reader using HTML5. I have to say, I’m so excited for HTML5 and its disruptive potential. But that’s beside the point. Amazon’s latest move in their chess match with Apple for control of the e-book market is both bold and cunning. It’s not checkmate, but I think it is certain to garner fans and goodwill, which will be crucial for market-share supremacy. So now, let’s look back at this epic rivalry to shed some light on Amazon’s latest move.

Originally, Amazon viewed sales of e-books as a tool for building their foothold in the e-reader market. This was certainly their purpose in 2007 when they started selling all e-books at the consumer-friendly price of $9.99. By keeping e-book prices low, they hoped to encourage readers to purchase their Kindle. This model isn’t new, of course. They actually copied Apple’s iTunes/iPod business model: price the content cheaply to sell the hardware.

Of course, Kindle users will recognize that Amazon no longer sells all their books for $9.99. In fact, their hand was forced by Apple. In order to gain a foothold in the e-book market leading up to the release of the iPad, Apple penned disruptive deals with several publishers in early 2010. These deals gave pricing power back to the publishers in return for assurances that the publishers would not sell their e-books for lower prices through any other retailer, i.e., Amazon’s Kindle marketplace. Amazon, caught between the proverbial rock and a hard place, responded by updating their deals with publishers to match the terms offered by Apple. If Amazon hadn’t, they would have seen their e-book offerings decline, and the value of the Kindle and its marketplace would have eventually plummeted.

On a side note, this occurrence is now the subject of a class action lawsuit against Apple and the participating publishers. The move from Apple and the participating publishers, if it can be proven to have been planned in collusion (which seems likely to be true but difficult to prove), would in fact be illegal anti-competitive behavior. Amazon doesn’t seem to have had any hand in this lawsuit, but I imagine it is hardly unwelcome to the e-retailer. A good PR campaign around this lawsuit, regardless of the verdict, might allow Amazon to paint themselves as the beleaguered friend of readers everywhere, and Apple as the evil corporate giant only interested in getting market share at any cost.

The next notable move also came from Apple. In June, Apple announced rule changes for third-party iOS apps. Specifically, any app for iOS platforms (including iPhone and iPad) could not include links to external sales sites. Apple’s intent was to force app makers to use in-app sales (of which Apple would get a 30% cut) instead of directing consumers to external sales websites.

This move wasn’t designed exclusively to target Amazon, but it seems aimed particularly at the Kindle. The Kindle App is consistently one of the most downloaded for Apple products. Meanwhile, Apple wants their users to buy e-books through iTunes, where they earn a 30% share of revenues. If Amazon plays along with the rule change, Apple will get a 30% share of e-books sold through the Kindle App for iOS. If Amazon doesn’t play along, then Apple has made it less convenient for Kindle App users to get the books they want, and potentially increased the relative appeal/value of the iTunes e-book store. Either way, Apple wins.
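The arithmetic of that 30% cut shows why Amazon balked. On a $9.99 e-book (the price is from the earlier discussion; the split below is just the stated 30/70 division, ignoring publisher royalties and other costs):

```python
# Apple's in-app purchase split on a $9.99 e-book.
# Simplified: ignores publisher royalties and other costs of sale.
price = 9.99
apple_cut = round(price * 0.30, 2)    # Apple's 30% share
seller_net = round(price - apple_cut, 2)  # what the seller keeps
print(apple_cut, seller_net)
```

Handing over roughly $3 per book on what is already a thin-margin product would gut the economics of in-app e-book sales, which is why a workaround was worth building.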

In response, Amazon changed the Kindle App: they removed the web links for purchases, but they didn’t add in-app “buy” buttons. Instead, less than two months after Apple’s rule change, Amazon released the Kindle Cloud Reader, using HTML5. With this new platform, people can buy books from Kindle, store them in the cloud, read them online or download them to many different platforms, including iOS. Brilliant! And not just because it gets around Apple’s ridiculous attempts to prevent iPhone and iPad users from buying e-books from the Kindle marketplace. It’s also perfectly aligned with Amazon’s focus on becoming a key player in cloud services for digital media. And in my humble opinion, digital media consumption and storage will increasingly be moving to the cloud in the near future. If I’m right, then Amazon is positioning themselves perfectly to be the go-to retailer of digital media, conveniently stored directly in the Amazon Cloud Drive and accessed anywhere via the Amazon Cloud Player.

The match isn’t over. I expect a brilliant countermove from Apple, who are, of course, working on their own cloud services offering and who knows what else. But for now, I’m enjoying watching these giants duke it out.

It’s been a bumpy week for Nintendo. On July 28th, after experiencing lackluster sales of their latest system–the 3DS, a portable gaming device with a 3D screen–Nintendo announced the company’s first-ever quarterly loss and a unit price cut for the 3DS from $250 to $170. While the price cut comes as welcome news to gamers and Nintendo fans, the markets didn’t take it quite so well. After this announcement, Nintendo’s stock opened down 4.4% in the US (the stock had closed at 22.17 on July 27th and opened on the 28th at 21.20). Moreover, where the S&P stayed fairly level throughout the day, Nintendo’s stock continued to drop, eventually closing at 19.45 on July 28th, down more than 12% from the prior day’s close.
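The percentage moves quoted above follow directly from those prices; a quick check:

```python
# Verify the quoted moves from the prices in the post.
prev_close, open_28, close_28 = 22.17, 21.20, 19.45

open_drop = (open_28 / prev_close - 1) * 100   # % change at the open
day_drop = (close_28 / prev_close - 1) * 100   # % change by the close
print(f"open: {open_drop:.1f}%, close: {day_drop:.1f}%")
```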

On July 29th, Nintendo’s CEO, Satoru Iwata, announced a series of management pay cuts, including a 50% cut to his own salary. The move seems to have been an attempt to halt the stock’s decline. And it worked. While the S&P opened down on July 29th, Nintendo opened up about 1.5% at 19.75.

Source: Yahoo Finance Interactive Charts

In many ways, the voluntary salary cuts are a form of bonding, a way to demonstrate the management’s commitment to the shareholders. Nintendo’s management, by decreasing their guaranteed salary, have increased their incentive to boost the company’s performance and thus their own bonus potential. That being said, it seems the Nintendo executive salaries weren’t exactly overblown by today’s standards; Iwata himself earned only an estimated $770k in base salary in 2010 (the average base salary of an S&P 500 CEO in 2011 is approximately $1.1 million).

The stock’s decline might also have been arrested in part by Iwata’s more complete explanation of the decision to reduce the price of the 3DS. A price cut may spur sales of the portable gaming device. Increases in sales would help the company achieve scale efficiencies. In other words, the more Nintendo sells, the cheaper it will be to make each 3DS. But it’s anyone’s guess whether the 3DS will sell at this lower price point. The device faces increasing competition from the substitute market of high-end smartphones, which are relatively cheap for consumers due to subsidies from wireless phone companies. Nintendo also talked about the need to increase the number of 3D games available for the system, which would certainly broaden its appeal.

For me, the real surprise in all this is that Iwata and the rest of the Nintendo management bothered to reduce their salaries. The move makes a good show and does seem to demonstrate a commitment to the shareholders. But if the management were really committed to the shareholders, they should find a good way to use their huge cash reserves. The company currently holds over $13 billion in cash, which is more than half the company’s total market cap. As it is, the cash reserves give Nintendo management a lot of “breathing room” to make mistakes, like with the 3DS. The excess cash also depresses the stock’s rate of return. If management wanted to demonstrate their commitment to the shareholders, they could have kept their salary and either paid out their free cash flow (the cash they don’t need to fund operations and planned future investments) in a large one-time dividend or announced sound future investments for the cash reserves.

Don’t get me wrong. I think the voluntary salary cuts are a good bonding device; they send a powerful message to the shareholders. The message just isn’t as powerful as figuring out what to do with the cash reserves.

Broadcast & Telecom/Tech duke it out in Battle for Bandwidth

The National Association of Broadcasters (NAB) released a study this week in response to FCC plans to conduct a “voluntary” bandwidth auction of the UHF channels 31-51, currently used by broadcast television stations. The new NAB study suggests that the FCC’s plans will impact roughly 40% of all broadcast stations in the US. The report also suggests that over 70 stations in the top 10 markets will permanently go off air under such a scheme: mostly Spanish-language stations and public access stations. Further, due to conflicts with Canadian broadcast networks, TV markets on our northern border, like Detroit, could go completely dark.

In response to the NAB’s report, the FCC, the Consumer Electronics Association (CEA) and CTIA-The Wireless Association (a telecom industry organization) have accused the NAB of using scare tactics to block the FCC’s proposal, which could in theory bring in over $30 billion of revenue to help address the nation’s debt crisis. These organizations have suggested that this plan will actually be a boon for the broadcast industry, whose members could use the auctions to make a profitable exit from a declining industry. In the same breath, they argue that the NAB’s report “sets up and knocks down a purely fictional straw man,” simply to protect their own business interests. They don’t seem to notice the discrepancy between these two positions. Either the bandwidth auction and subsequent repacking will be good for broadcast, or it will hurt them. And the truth is, it will do a little of both.

The repacking will force greater concentration of channels in the VHF wavelengths, which could compromise the quality of all channels on the system. The closer the channels are in wavelength, the more neighboring channels will disrupt each other’s signals. Additionally, the disruption during the “repacking” will be costly to the stations affected. Of course, the FCC’s plans include compensation for some of the costs of the repacking, but it may not include secondary costs like rebranding and public awareness campaigns. Moreover, the industry as a whole will certainly contract because there simply isn’t enough bandwidth to accommodate every current station within channels 1-30.

The industry contraction may at first seem to be a bad thing, but is it really? Certainly the contraction of the industry will harm the NAB and other industry organizations; the contraction may reduce the industry’s lobbying clout, etc. But for individual players within the industry, the auction plan could result in a greatly reduced competitive landscape, and thereby less rivalry for local advertising dollars. Greater concentration in local broadcast stations may increase the pricing power of those stations that remain. Unfortunately, a reduction in channel options available via broadcast might also drive the remaining 10-15% of the population without cable or satellite to finally sign up. That would destroy one of the industry’s remaining competitive advantages over the cable industry.

Overall, this change does seem to be a poor one for the broadcast television industry in the US. But that shouldn’t bother the FCC. The FCC is supposed to look out for the best interests of consumers; they’re supposed to implement policy in favor of the American taxpayers. In some ways, a reduction in broadcast options certainly would seem to harm consumers. Primarily, there would be fewer local TV stations, and thereby less local news and content. The FCC originally granted TV stations this bandwidth explicitly to support public interests in widespread dissemination of news information. Now, however, they seem to believe this bandwidth would better serve the public interest in the hands of tech and telecom companies.

The most obvious way this helps the taxpayer is in the aforementioned $30 billion of government revenues, which would help alleviate some of the government debt burden. But one can’t deny that extra bandwidth in the hands of the telecom and tech companies could theoretically be used to drive innovation and fuel economic growth. By comparison, the broadcast TV industry has entered a phase of decline, where jobs are being cut and capital gets smaller returns. Supposedly the FCC believes the public interest in news dissemination can still be served with a smaller amount of bandwidth dedicated to broadcast television (i.e., most of the main networks would be unaffected). The FCC might presume that widespread availability of the internet more than compensates for the loss of a few channels of TV for the purpose of information dissemination.

The truth seems to be that, although painful, the reallocation of bandwidth would theoretically be in the best interest of the American public. Really, it’s just a shame that the industry organizations supporting the bandwidth auction are coming off as unfeeling d-bags. The CEA and CTIA should try using a little PR savvy and focusing on the economic good, while acknowledging that there are some painful but necessary drawbacks to their plan. If they did, people might be more inclined to take their side.

Yesterday I read a long treatise from Chip Bayers at AdWeek about the problem with internet advertising; or why digital media ad sales suck. It’s a lengthy and somewhat convoluted article, jam-packed with factoids and quotes from industry big-wigs and sprinkled with some good analysis. There are a few key points I took away from this article that demonstrate some market opportunities open to a cunning entrepreneur or innovative corporation.

Mr. Bayers points to a $60 billion gap in ad spending online, compared with TV ad spend. Given the amount of time consumers spend online relative to the amount of time spent watching TV, there should be $60 billion more ad spend in digital than there is currently.

The easiest explanation for this gap is simple supply/demand economics. The internet offers a virtually limitless supply of advertising space. There is so much excess supply that the price of advertising online is necessarily depressed.

Another issue that Mr. Bayers points to affects the demand side of this imbalance. Because there is so much excess supply, digital ad space is cheap. The low cost attracts low quality advertisers, like debt relief agencies and weight loss scams. The major advertisers like car manufacturers and CPGs don’t want their brands represented next to the chaff that currently advertises online. Thus, many of the biggest spenders in advertising are not buying much online ad space.

Much of the cheap-o advertising you see on About.com and similar content farm sites is serviced by digital ad networks that are highly automated. Many of these ad networks place ads based only on a rough CPM and how many people the advertiser hopes to reach. Others are more targeted and use cookies to track users’ web histories, allowing advertisers to directly target users with specific search histories or patterns. But even these techniques for targeting do not address the basic issue of excess supply that underlies deflated digital ad prices.
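For readers unfamiliar with the unit: CPM is cost per mille, i.e., the price per thousand impressions, so a campaign’s cost scales linearly with reach. The rates below are illustrative, not quotes from any real network:

```python
# CPM = cost per mille (price per 1,000 ad impressions).
def campaign_cost(impressions, cpm_dollars):
    return impressions / 1000 * cpm_dollars

# A cheap automated-network buy vs. a premium direct buy (illustrative rates)
print(campaign_cost(2_000_000, 0.50))   # remnant inventory at $0.50 CPM
print(campaign_cost(2_000_000, 20.00))  # premium placement at $20 CPM
```

The 40x price spread between remnant and premium inventory is the same wedge the article describes: excess supply drags the low end down, and quality sites refuse to let networks price them there.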

From my perspective, Mr. Bayers has missed one essential issue with the current digital advertising landscape: the excess supply and low CPMs resulting from ad networks have driven many high-quality digital content sites to keep their ad sales in-house. Ad networks are particularly unappealing to sites with high-quality content and desirable audiences. These sites have high-quality digital ad space and fear an ad network would force lower CPMs and thus lower-quality ads. Low-quality ads reduce the brand value of the site by tarnishing the consumer’s perception of it.

So we see that many high-quality sites don’t use digital ad networks, lest it harm their brand value. And we know that many high-quality advertisers don’t use digital ad networks because they only want to advertise on high-quality sites, lest it harm their brand value. It seems to me there is a big opportunity here for a differentiated ad network! If a digital ad network were created specifically to connect high-quality advertisers with high-quality content sites, it could ease the frictions that have created this $60 billion advertising gap and capture a lot of that value in the process.

Of course, this raises one huge question: why haven’t the existing ad networks already done this? Or if they have, why doesn’t it seem to be working? I don’t know the answer. Mr. Bayers would suggest it is because the successful ad networks are owned by tech firms who abhor advertising. So instead of focusing on sales and making the customer happy, they just build algorithms. I don’t know if this is true, but if it is, this prejudice has cost them dearly. They’ve left a lot of money on the table!

MoviePass, a San Francisco-based start-up, has been a hot topic on every media blog this week and has inspired cries of jubilation from moviegoers everywhere. The concept is simple: MoviePass is essentially Netflix for theaters. You pay a monthly fee and get to go to unlimited films in theaters. The start-up announced the launch of their beta test in San Francisco earlier this week. If you don’t know what I’m talking about, I recommend reading “All-You-Can-Watch MoviePass Brings Netflix Model to Theaters,” on Epicenter.

When I read this article on Wednesday, I thought: “Awesome! But how the heck did they get the exhibitors to agree to this?” I wanted to know the terms of the deal(s) MoviePass had made with the theater chains. Well, apparently, they skipped this step entirely. It came out today that MoviePass had announced their launch without getting consent from “participating” theaters. When exhibitors discovered they were supposedly “participating,” they publicly denounced MoviePass. As a result, the MoviePass beta launch has been delayed.

Now that the cat is out of the bag, I can’t help but wonder whether MoviePass’s management is brilliant or completely incompetent. When MoviePass announced their launch, either they naively thought exhibitors would accept the new system, or they knew it would fail but expected the press response to help them force a dialogue with the heretofore uncooperative exhibitors.

If the management believed their launch would be successful without the buy-in of the “participating” theaters, then they must be unforgivably incompetent. Yes, MoviePass had agreements with internet ticket vendors who probably would have paid the theaters for the MoviePass users. But without the explicit buy-in of the theater chains, staff at “participating” theaters wouldn’t have any procedures for accepting MoviePass “tickets.” The exhibitors wouldn’t have any process by which to account for the number of MoviePass users or to get reimbursed for the cost of the seat.

If, on the other hand, the MoviePass management knew the launch would fail, I’d call the move brilliant but risky. Even though MoviePass has a strong value proposition for the entire film industry, it can be hard for a start-up to be taken seriously by a major corporation, like AMC. The enthusiasm generated by MoviePass’s announcement offers some proof of concept. Additionally, MoviePass has ensured the attention of giant exhibitors by listing them as “participating theaters.” But this stunt could backfire. It could inspire the animosity of every exhibitor operating in San Francisco, without whose cooperation MoviePass’s business is doomed.

Of course, MoviePass could have other reasons for their premature launch. Perhaps they’re preparing for another round of financing and thought a stunt like this could increase their valuation? I really don’t know. But I earnestly hope it isn’t incompetence… because I would really love to see MoviePass make it to New York!

Publishers Buoy Bookstores

On Sunday, Media Decoder’s Julie Bosman reported on a promotion from Algonquin Books which allows consumers to get deals on e-books if they buy hardcopies in specific brick and mortar stores. The publisher’s hope is to buoy their endangered compatriots by driving consumers to physical book stores.

It seems publishers are trying to boost sales of physical copies by giving away digital copies. To people of my generation and younger, this tactic seems almost willfully ignorant of the direction media is heading. Sales of e-books are growing at an astronomical rate: around 200% growth in 2010. And while many of us, myself included, love the sensory experience of reading a real book… they’re just not very practical.

In fact, book stores (which again, I love) aren’t very practical. They’re not practical for the consumer or for the industry. For the consumer, it’s so much easier to download an e-book than to drive to the store, scour the shelves, ask the snarky sales associate for help, find out they have to put it on order, and finally come back a week later to actually purchase the darn thing. For the industry, the physical locations and the sales staff are extra expenses that arguably add to the cost structure of the industry without adding to their value proposition. In fact, publishers typically give book stores a 50% discount on the retail price of the book. That means half of the cost of a book I buy at Barnes and Noble, for instance, is for my shopping experience! (Granted, Amazon gets a 30% share of their e-book sales.)

So why do publishers love physical book stores?

One popular explanation is that these old publishing houses are just stuck in their ways and respond defensively when threatened by change. I find this explanation to be lazy but also a little bit true. On the one hand, business models are hard to change. The publishing industry has been operating under one model for decades, and they can’t completely change their ways overnight. But because their old model is threatened by these shifts in consumer behavior, publishers must change or die. And they are changing! The fact that they are using digital copies to lure buyers to brick-and-mortar stores shows that they are attempting to confront the issue. However, this response does seem defensive rather than proactive.

The Media Decoder article touches on one major reason why publishers would support book stores:

“If physical bookstores continue to disappear, publishers worry, their books will not have an opportunity to be discovered by customers who wander into a store without knowing what they want to buy.”

At first glance, you might be confused by this quote. Surely consumers are equally capable of discovering books online without knowing exactly what they’re looking for. The real problem is that the internet is a lot more crowded than a brick-and-mortar bookstore. Big publishers have a big advantage in book stores. Because of their existing relationships and their large portfolios, big publishers are infinitely better positioned to get their books into stores (and displayed in prominent positions) than an independent publisher. In other words, book stores are a key marketing tool and a source of advantage for big publishers.

But I think publishers are fighting a losing battle. They should look at the music industry and the state of the major record labels for a glimpse into the future. There, brick-and-mortar stores are all but dead, because the only thing they added to the industry was extra cost. Yet the major labels have survived, despite the proliferation of independent labels. But they fought the digital revolution tooth and nail, and it cost them dearly. Publishers should learn from what the labels did wrong and what they did right, and start embracing the higher-margin digital publishing business.

For years I have been frustrated that MSOs haven’t invested in a more interactive experience for advertisers.

Imagine: you’re watching TV and see an ad for something you’re interested in. Then an icon pops up on your screen prompting you to push a button on your remote. You do and immediately you get a navigation panel on your screen, telling you more about the product and maybe even letting you purchase the product. Or perhaps you get coupon codes or find local stores where you can buy the product.

Well, MSOs have dragged their feet because, frankly, the business model is new and it’s not entirely obvious how this type of feature would be monetized. Once they invested in the development, could they reap the rewards?

After years of knowing this opportunity was within their grasp, it seems they have been beaten to the punch by Web 2.0 darling Shazam. On Tuesday, June 22nd, Shazam announced a $32 million round of financing from a few venture funds to support their meteoric growth of approximately 1 million new users per week and to build out their fledgling TV offering: “Shazam for TV.”

With Shazam for TV, you can now use the Shazam app on your mobile device to “discover” additional content and promotions from participating content providers and advertisers. Early partners include MTV Networks, NBC Universal, Discovery and Old Navy. When participating content is on your TV, simply point your mobile device at the screen and Shazam will recognize the audio track. Then Shazam will beam extra content or promotions (whatever the partner has provided) to your mobile device.

Shazam has said their conversion rate from “discovery” to purchase is 10%. Or at least, that seems to have been the case for the Old Navy campaign they ran back in February. Presumably, this metric was obtained by tracking how many people used the promotional codes and/or how many people navigated to the Old Navy webpage directly from the Shazam promotion. Either way, 10% conversion is pretty spectacular. It seems the business model is more than viable; it’s enviable!

Well, I’m betting the MSOs are paying attention now. How long before they throw their hat into the ring is anybody’s guess, but something tells me it won’t be too long. Shazam had better take full advantage of their head start, because the giants are coming!