Commoditize your complement (2019) (gwern.net)
128 points by harperlee on Dec 19, 2020 | 80 comments



> “There’s An App For That” is why you buy an iPhone—but it’s Apple with the $930 billion market cap & not the app developers.

Oh yeah? Tell that to Facebook, Netflix, and the YouTube part of Google. Tell that to Twitter, Snapchat, and TikTok.

There’s another principle at play here, and I think it’s an important one. Pleasantly, it also operates as a kind of Golden Rule for economics in platform development (and has legs at a wider scale, too, but I won’t cover that).

Specifically: Create more value than you capture. Apple of course captures some of the value of their ecosystem. So do other platform providers. The thing that distinguishes a healthy platform ecosystem from unhealthy ones is whether or not more value is on offer to platform adopters than the platform itself captures. Unsuccessful platforms generally fail at this, either because they don’t provide enough intrinsic value themselves (and therefore “end users” don’t benefit) or because they don’t make it profitable to operate on the platform (they try and capture too much of the value for themselves, and thereby strangle their own success).


Facebook, Netflix, YouTube/Google would be in similar positions even without Apple. They are not "app developers", they are software developers. Snapchat and TikTok are good examples, but what are their market caps? Add them all together and I think they won't reach $930 bln (whereas, e.g., if you add together all the companies that depend on Amazon, the valuation would most likely be much higher than Amazon's; that should be the norm for a platform).

Apple absolutely attempted to commoditize apps / establish $1 to $5 as the "normal" price range.


When the iPad launched, all of Apple's own unbundled productivity apps were listed at $10 each, so we know for a fact that this is not true. They tried very hard to establish a market with sustainable prices for premium apps.

I think it was just unreasonable to try to do that with phones back in 2008, though. With the small screen size and relatively low resources of the very early devices, high prices for phone apps just weren't going to fly.


Whether or not $10 is a "sustainable premium price" is more than questionable. They tried so hard that they still don't have free trials and paid upgrades for apps... (I mean true support, not the in-app-purchase workarounds.)


Those apps existed before Apple had power.

That's important because Apple couldn't have started their ruse by moving in and taking away FB's power from the start.

If FB was weaker, Apple could be moving on their turf now by requiring a cut of their money.

You're missing the bit about power and value chains here: if Apple can take all of your surpluses, they will; it has nothing to do with how much money you make.

Consider also the vast distortions happening right now: any digital player that wants to charge a fee has to give 30% to Apple. That's huge.

The alternative is advertising - for now - Apple doesn't go after that.

Many businesses do not run on huge margins. If you could only eke out a 5% margin living under Apple's 30% cut, then switching to ads (assuming the same revenue and gross margins) would multiply your profit severalfold.
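A quick back-of-the-envelope sketch of that arithmetic, with made-up numbers (not anyone's real financials):

    # Hypothetical app business: same revenue and gross costs either way,
    # the only difference is whether Apple takes its 30% of revenue.
    revenue   = 100.0                       # arbitrary units
    costs     = 65.0                        # everything except Apple's cut
    apple_cut = 0.30 * revenue              # 30% App Store fee

    profit_with_cut = revenue - apple_cut - costs   # 5.0  -> a 5% margin
    profit_with_ads = revenue - costs               # 35.0 -> a 35% margin

    print(profit_with_ads / profit_with_cut)        # 7.0x the profit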

Of the monopolizers, Facebook is the least worst - nobody needs them, and they're not a primary source for anything.

Apple's closed system is the worst, followed by Google's incumbency in search, propped up by their control of Android, Chrome etc. Third is Google's abuse of Search to screw over competitors in nearby fields.


Snapchat and TikTok were both created well after Apple achieved its current market position. Not to mention the fact that I have only picked a few examples from a few categories. What will you say when I add Uber, DoorDash, Airbnb and others to the list? Again, I'm barely scratching the surface here.

You really can't escape the fact that applications built on top of Apple's platform are worth, in aggregate, multiples of Apple's market cap. I'm not an Apple apologist, this is just obvious math.


Companies that use Verizon network are bigger in aggregate than Verizon itself! Look at all the value Verizon has created!

Also - neither Snap nor TikTok charges money for their apps; it was never an option due to Apple's monopoly.


Right, the App Store is worth half a trillion a year just in raw billings and sales alone. That's double Apple's own revenue. The value of free services on top of that, where the value is captured in other ways (which is most of the companies you listed), is likely of the same order or even significantly more. If so, that all adds up to a decent multiple of Apple's own revenue. Maybe 3x to 5x depending on what assumptions you make.

This is why competing phone platforms can’t get any traction against iOS and Android. You’re not just competing with Apple and Google, you’re competing with the entire ecosystems of companies and services on those platforms which combined are worth far more, and have aggregate resources far exceeding those of even Apple and Google.


> Specifically: Create more value than you capture.

They can ensure this by copying the most successful apps in the App Store, down-ranking the original authors, and repeating this game.


Apple has tried to put Netflix out of business and so far isn't succeeding. They sort of tried to put Facebook out of business and failed so miserably they may not get over it for another 5 years. They haven't even been able to beat Spotify, where they arguably should be dominant.

Apple is pretty pitiful as a services company. Yes, they're trying, and they may eventually figure out whatever structural issue it is that's keeping them from being successful here, but I think for every significant business Apple has successfully cloned (Evernote, maybe? What else?) you can come up with at least another where they haven't been able to despite trying, and another 2 or 3 where they haven't tried (yet).

Yes, many small startups watch WWDC with fear in their souls that Apple is about to disintermediate them. It's worth spending some time to understand the differences between those companies and companies that Apple has failed to compete with despite trying hard. I'm sure some obvious differences will jump out at you if you're willing to abandon the position that Apple is an unbeatable juggernaut.


According to Drew Houston, Steve Jobs offered to acquire Dropbox and, when refused, said he'd compete with them. iCloud is just about there at this point; I know people who turned off Dropbox and just use iCloud (or the Google offering).


So I guess Apple really put them out of business, right?

Oh wait, no, they’re worth $7.5B as of Friday. /s Definitely Drew should have taken the money and run. I mean, he’s only a single digit billionaire!



> Headline: Sun Develops Java; New “Bytecode” System Means Write Once, Run Anywhere [WORA].

> Sun’s enthusiasm for WORA is, um, strange, because Sun is a hardware company. Making hardware a commodity is the last thing they want to do. Oooooooooooooooooooooops! Sun is the loose cannon of the computer industry. Unable to see past their raging fear and loathing of Microsoft

I've never heard this. One story I've heard is that they were worried about the growth of Windows-only software. Another is that they thought they could make money off of Java.


Yeah, I've never understood this part of Joel's analysis.

Sun were a hardware specialist. They wanted commoditised software, i.e. fungible, cheaply available everywhere in the same form, so that it is not a differentiator, as it is not an area of differentiation that they believed they'd win on, unlike hardware.

Further, they had a reasonable fear that software would more and more be made to run in only one place - Windows - which would lock them out of large parts of the market (like a car company whose cars ran on a fuel available in only a small number of remote locations).

So, they did their best to commoditise software with write-once run-anywhere. It fits perfectly with the strategy Joel outlines. Now all the software vendors are directly competing, as they can't use hardware tie-ins, and producing more software (write once), and all this should drive down software prices, and thereby leave more money on the table for superior hardware, which will become (they hoped) the key competitive advantage in the market-place.

Clearly, they lost anyway. As far as I can tell, that was because they lost on their strength: the market wanted commodity hardware (ever improving with Moore's law, anyway) for vast data centres, rather than more capable and expensive hardware.

Yeah, if they could have built amazing software, like hardware-producing Apple with their popular consumer OS and other software products, then they could maybe have won. But then, if they couldn't win on hardware, their strength, why should anyone expect them to win on software?

(Okay, Java alone proves they had serious software chops, and I hear much of their software was good, but clearly they didn't think they could compete with the entire Microsoft ecosystem of software (and then there is Linux, FreeBSD, et cetera...), and I tend to agree).

Basically, they bet on hardware in an age of software. This wasn't a failure to commoditise their complement, but a failure to choose (to the extent they had a choice...) the right side of a complement divide.


Sun was destroyed by commodity Intel hardware on one side and free open source software (Apache/Linux/GCC, etc) on the other. In the very early 90s Sun workstations and servers could do things x86 boxes couldn’t do. Once that changed they were doomed.


That makes sense.

What were these unique 90s features?


On desktops, Sun workstations had graphics capabilities PCs could only dream of.

Most multi cpu x86 systems were file servers built for parallel I/O. Although it was technically possible to build symmetric multiprocessing x86 systems, and some vendors did so, the PC architecture specifically wasn’t conducive to it, software support was very limited and overall they didn’t work very well. Mostly they were hacky dual cpu systems, meanwhile Sun was selling highly efficient 4 and 8 cpu systems with high bandwidth architectures. The first SMP system I used was a Sparcserver 630MP in 1992. We had some x86 boxes running Xenix but they were pitiful in comparison.

Sun didn’t need to market expansion cards as “plug and play” because how else would they work? On x86 it was a wondrous and largely mythical concept deep into the 90s.

64 bit SPARC systems were available 8 years before x86-64 although that was mid 90s.


The stories you heard do not contradict the analysis from Joel, do they?

"They thought they could make money from Java": yes, but if you are a hardware company, you shouldn't push for the development of software that commoditizes your server.

"They were worried about Windows-only software": if you make money from hardware, you fight Microsoft by integrating/developing/fomenting software that can run exclusively on your stack. IOW, do what Apple did and still does.


I think Sun tried to sell more servers and storage systems by giving free access to the tools (JVM/JDK) that would simplify enterprise software development - without having to worry about core dumps or very long compilation times. It might be that Joel was too focused on Microsoft to see that.

Also, Sun was selling licenses for very expensive J2EE stuff (remember EJB? That's the stuff that was replaced by Spring). Giving free access to the JDK would make sense as making the complement cheaper to create demand for EJB (that would have worked if EJB had been easier to use).

Java was initially focused on small devices, but that was not its main use case at the turn of the century. I think phone hardware was too limited for Java until a few years later, when Sun was no longer around (actually we had the epic Oracle vs Google case, which might have been a Sun vs Google case - sans acquisition by Oracle).

It may well be that Linux became good enough for this market segment and thereby turned server hardware (i.e. Sun) into a commodity, but that's a different story.


The original target for Java (Oak) was set top boxes. Arguably Sun was trying to commoditize the set top box industry so that they could sell expensive servers to cable operators. Then someone decided to open source the result (engineers aren’t always great economists / financial analysts).

Amusingly in the 25 years since, set top box manufacturers have commoditized themselves very effectively.


> Then someone decided to open source the result

The Sun JVM was open-sourced in 2006-2007 [1] - 10 years after the introduction of the platform and after it was already wildly popular.

[1]: https://web.archive.org/web/20140527220942/http://grnlight.n...


It seems that computer hardware, CPU tech, is currently in a phase of anti-commoditization.


Definitely. Microsoft is apparently getting in on the customized silicon game too. Frankly I wish we had seen this at least ten years ago.


Hardware and software go hand in hand. A custom CPU would also need custom software. The endgame is probably "you can only run this in our cloud". But it's open source, you say? Yes, before they added the proprietary code; so if you do run the code yourself you can't use half the features, performance will be degraded, and you won't have access to the community/plugin ecosystem.


There is no way back to this kind of lock in. If anything, businesses expect the opposite: maximum ability to transparently move their load between various cloud providers and their own on-prem machines.

Perhaps specialized hardware (GPU) is different but not generic load.


> If anything, businesses expect the opposite: maximum ability to transparently move their load between various cloud providers and their own on-prem machines.

Is this actually true? In my experience, being technology-agnostic is something that engineering orgs prioritize somewhat, but is fairly low on the list of priorities for business orgs, if at all. A lot of the most successful business tech has high degrees of lock-in.


When everybody is doing AI and deep learning, then everybody wants a GPU. Good for Nvidia.


Aren’t their chips from Samsung though?


I’m not convinced this strategy can help create monopolies. If I can help create demand for my complement then that would make sense to boost my product but effectively creating a monopoly with my product is another matter.


I think it’s better to say it creates an oligopoly among the few strong competitors at the chokepoint non-commodity layer.

Various smartphone developers made app ecosystems into commodities, so those developers have a very, very large market share of all smartphones taken together.

But I do agree that this strategy doesn’t, on its own, create a monopoly for one member of the chokepoint layer vs another. It helps by allowing businesses in the chokepoint layer to be price-neutral to other layers above or below them that are commodities (those layers face so much price competition that they have no negotiating power on the other layers) and focus on monopoly strategies against other competitors in the same layer, free of concern about needing to compromise with other layers.


This is an interesting perspective; however, only the most extreme examples given are convincing enough to be as truly adversarial as the author implies.

Using the article's very own examples, hardware seems to be attempting to commoditize software, and software seems to be attempting to commoditize hardware; the chain is bidirectional and long, and there seem to be no clear rules. In the example of Sun and Ximian, it can be viewed either as commoditizing or as directly funding a nonexistent yet required counterpart due to a lack of in-house expertise.

I suspect the reality of the majority of less visible chains of complements is closer to symbiotic than parasitic - that intentional and extreme commoditization are the adversarial exceptions, which are so large and monopolistic as to bias the landscape... even then, as other commenters point out, those relationships don't appear to be sustainable long term, due to longer-term negative side effects or a changing landscape.


> hardware ... commoditize ... software, software ... commoditize ... hardware ... no clear rules

we all win.


But that's not the reality, though. Ultimately software doesn't want to commoditize hardware; it wants to have choices, sure, but only for itself, so it can lock people into a specific choice and sell to them at high margins too. Vice versa is true as well: hardware wants to build a software ecosystem around it, but in such a way that the ecosystem uses only its hardware, not other hardware. This is why Wintel is a thing.

Only at a quick glance does it appear that we all win; really none of us can win from this. Worse, megacorps don't really compete with each other. Their view of competition is pretty perverse: they "compete" only as long as such "competing" doesn't eat into the juicy high-margin profits that exist only in monopolized markets. So it's more like dividing markets and forming oligopolies, not commoditizing each other.


Companies do this, but I'm not so sure smart is the right adjective. Better positioned, or risk averse, might be a better description.

Effectively, this argues for seeding a new market (e.g., IBM and enterprise OSS). Makes sense. But what's to stop _anyone_ else from getting into that same business? And doing it better? Maybe you create an initial advantage, but for how long? Game consoles come to mind.

It's an effective strategy. But you have to be spot on in the execution, and always be looking over your shoulder. Wouldn't a Zero to One / Blue Ocean mindset be smarter? Yes, more difficult. But nonetheless the far bigger win.


Well, in a way all businesses want to become monopolies. And hence they try to build moats around their core businesses to prevent competitors from entering that market, or in the worst case, causing the entire market to become commodified. The form of that moat varies depending on what said core business is. One version is even mentioned in the article:

> Another way that I like to express that is "create a desert of profitability around you". I once had a strategy professor define the Google business model somewhat like that, where "Google tries to make every other business around it free or irrelevant"… A desert of profitability shifts consumers to you, and keeps competitors away.

> Wouldn't a Zero to One / Blue Ocean mindset be smarter?

So if you're about to start a business, should you create a new market (becoming a monopolist in that market), or should you enter some existing highly competitive and commodified market? The answer sounds obvious. Once you do that, then the next question is how do you maintain your monopoly position within that market? See above.


> But what's to stop _anyone_ else from getting into that same business? And doing it better?

You mean your main business or your complement?

If it's the complement, you commoditize it, and try to push more people into entering there and doing it better.


It's about pulling the ladder up behind you, so you grow with the commodity and control the risk of upstarts. It's interesting to compare something like how aws/azure/gcp (IaaS and up) commoditize snowflake/databricks (data PaaS and up):

* free complement: drive to zero so you go up, and competitors stay relatively weak individually. A $T corp like aws doesn't really care if any individual sw vendor (OSS, ...) gets big b/c they're still small wrt aws's offerings. The few emerging big winners like snowflake/databricks make aws more useful to customers and help the on-prem -> cloud shift: aws and friends are focused on 30%+ quarterly growth, which is nuts at their scale. They can likewise slowly decrease the ISV margin at their leisure for the non-oss parts, and for now use them to keep juicing the growth.

* pricey moat: the hyperscalers keep reinvesting monopoly-margin-driven profits to be even more untouchable. $T corp building their own chips, HA, etc., layers. It's tough for a YC startup to decide to compete w/ AWS head-on - hw/sw codesign takes a lot of time & $, and getting even just sw would be hard. it took huge amounts of VC $ for snowflake/databricks to get where they are, and yet, it still probably doesn't make sense for them to compete w/ AWS/gcp/az via custom hw etc. (TPUs ..), vs. riding commodity hpc (e.g., databricks achieving its AI vision by reselling GPU hw/sw controlled by Nvidia+Google, unlike their past control of Spark). IBM buying redhat gives a sense of how hard it is to buy in at this point.


It seems more like a side effect than a market strategy. Are there mixed strategies? I.e., even car companies tend to look askance at, or even void warranties for, using aftermarket parts.


> Makes sense. But what's to stop _anyone_ else from getting into that same business? And doing it better? Maybe you create an initial advantage, but for how long? Game consoles come to mind.

You have to keep in mind that the natural outcome of capitalism is to create a forsaken hellscape of competition where individual companies struggle to make margins good enough to survive.

In this case, the truism is to commoditize your complement. Companies have an incentive to push as much capitalism as they can into things they don't make money on, and keep the capitalism away from their main products as long as they can.

So e.g. Google doesn't want to commoditize maps, because they're happy having a monopoly on mapping services. Google's competitors want to commoditize maps, because by doing so they undercut Google and make their own cloud service / phone OS / office suite / other all-encompassing Orwellian service more competitive.


When I saw the title and the comments before reading the article, I thought to myself "what Joel wrote[1] and I read almost 20 years ago made a ton of sense"[^]. Then I looked at the article, and realized that it opens with a link to Joel's article and then lists a bunch of strained examples (editors vs IDEs).

I would recommend that everyone read Joel's article[1] first. Note that an increase in demand means "the value to the consumers of each and every unit of your good increases" or "at each price more units will be sold" ... That is, the demand curve shifts to the northeast. Which is different from an increase in quantity demanded in response to a change in the good's own price (a movement along the demand curve).

Note also that while we tend to think in terms of individual companies' products, the demand for the output of the entire industry tends to increase when the complements become cheaper.

A great example of this is indeed Windows. Glossing over the details (such as Epson's ESC/P etc), prior to Windows, you knew that if you wanted to produce hardcopies with bar charts etc, you had to have a printer that worked with Harvard Graphics[2]. Post-Windows, you could make a printer and a driver for Windows and all software could use it. Which led to increased demand for both printers and software that uses printers. Harvard Graphics' real advantage was in being able to produce output on paper or physical slides. Once again, having Windows meant projectors could be used without worrying that you needed a special version of the software with special support for the particular projector you had. That helped grow the projector industry and cut the hassle of having to have hard copies of the presentation (which you could not correct if you noticed a problem at the last moment).

So, when I read the linked post, I thought it is best to be avoided in favor of reading Joel's post again. Joel's post is not without flaws either:

> If you can run your software anywhere, that makes hardware more of a commodity. As hardware prices go down, the market expands, driving more demand for software (and leaving customers with extra money to spend on software which can now be more expensive.)

As an economist, I would have noted that it is not just that consumers have more money and an individual firm can extract more of it ... it is that, as a result of a complementary good becoming cheaper, they value the thing you are selling more and therefore they are willing and able to spend more on the thing you (or other firms in your industry) are selling.
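To sketch that with a standard linear demand function (my notation, not the article's):

    Q_sw = a - b * P_sw - c * P_hw,   with b > 0 and c > 0 (complements)

Here Q_sw is the quantity of software demanded, P_sw its price, and P_hw the price of the complementary hardware. When P_hw falls, Q_sw rises at every P_sw: the whole software demand curve shifts out (northeast), which is the "increase in demand" sense above, as opposed to a movement along the curve when P_sw itself changes.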

Next, we should think about monopoly power and availability of close substitutes. :-)

[1]: https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/ [2]: https://en.wikipedia.org/wiki/Harvard_Graphics

[^] For background, I am a Ph.D. economist, and taught micro at various levels, and I develop software for a living


Do you feel like this article is straining at bringing monopolies into the explanation? I think the “commoditise your complements” holds regardless of whether the business is, or wants to be, a monopoly. Joel’s article and the Hal Varian work don’t seem to be talking about monopolies. (Caveat: I didn’t read all the sources in detail)


Things work in similar ways however the market is organized. But a monopoly will have a much easier time extracting value from the activity than companies in a competitive market.


Except that the definition of monopoly power is that the company is facing a downward sloping demand curve and that it is operating on the elastic portion of it.
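For reference, the textbook formula behind the "elastic portion" point:

    MR = P * (1 - 1/|e|),   where e is the price elasticity of demand

Marginal revenue is positive only where |e| > 1, so a profit-maximizing monopolist (with non-negative marginal cost) always ends up pricing on the elastic portion of its demand curve.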

In general, that provides an incentive for other firms to produce a close substitute at a slightly lower price and eat away at your profits unless the government is there to stop them. Monopoly power cannot be sustained without government intervention. In addition, monopoly pricing incentivizes consumers to look for substitutes.

If you look in real life, you will see that monopoly power is sustained through restrictions on who can sell stuff and from whom you are allowed to buy.


> Monopoly power cannot be sustained without government intervention.

Hum... Monopoly power cannot be sustained without barriers to entry. There's nothing saying that those barriers must come from a government. There are plenty of examples where they do, and also plenty where they don't.

Anyway, I don't see how any of that is an exception to my earlier point.


> a monopoly will have a much easier time extracting value

That's my point. In the static picture it might seem so, but in the dynamic picture the extracting of the value incentivizes competitors to enter and consumers to seek substitutes.

Would you mind mentioning an example of long lasting monopoly power (which is different than just being the sole seller of something -- after all, everyone is the sole seller of something)?


There would not be a monopoly if there were close substitutes.


At time t0, there may not be ... but the exercise of monopoly power incentivizes others to produce substitutes and consumers to seek them.

> In addition to the radio relay services, MCI soon made plans to offer voice, computer information, and data communication services for business customers unable to afford AT&T's TELPAK service

E.g. MCI vs AT&T[1]

[1]: https://en.wikipedia.org/wiki/MCI_Communications#Founding


Interesting read. I guess you can view what Amazon has done in this light: provide the best solution for taking possession of things, the complement being the supply chain that the marketplace aims to commoditize.


I don't think this is true, at least not the way you think of.

Amazon hasn't really commoditized the supply chain. Its approach is closer to the Disney / 90s Microsoft monolith: lots of vertical integration, redefine processes at every level, etc. Commoditizing the supply chain would be if they hired intermediaries and encouraged competition between them, but they mostly do things in-house.

(well, except for hiring, but the commoditization of jobs unfortunately isn't a new phenomenon)

On the other hand, maybe it can be argued that Amazon is trying to commoditize the physical marketplace itself. It notoriously makes very little money on physical goods, and most of its profits come from its cloud division. But I think we're getting a little far from the concept described in the article.


So this is why tech companies do so much open source work?


It's certainly one of the reasons. (There are also others, like building a good reputation to help attract and retain talent).

For example, in 2009, in the world of physics engines, Havok (acquired by Intel) and PhysX (owned by Nvidia) were pretty much the two notable market leaders. Physics acceleration was becoming a big deal in games and an important complement to GPUs, and AMD/ATI found itself without a horse in the race. So did it develop its own proprietary physics engine? No, it threw its weight behind the open source Bullet physics project, to "commoditize their complements". They don't need to make money from engine licensing; they need to present a viable alternative to PhysX to keep Nvidia from getting ahead. For that, the lower the barrier to entry (free) the better, and the lower the barrier to contributions (libre) the better.

Now we're seeing Facebook (big stake in VR) contributing to Blender (3D content creation, and so a complement to VR). And so on!


I don't think that open source work is a commodity. On the contrary, when well done, it functions almost as standard bearing, PR and legislation. Nothing about that says commodity to me. In fact, it's the kind of differentiated strategic work that's quite far away from commodity.


Commodity is not quite the correct term as in "a basic good used in commerce that is interchangeable with other goods of the same type".

What people want to say, I assume, is that building on Open Source is no competitive advantage because your competition can easily adopt it as well.


I’ve often thought about this in two areas:

- “full stack development” attempts to commoditize software engineering within a firm, so that complements (product management, designers, political managers) are seen as having more value.

- various cloud vendors trying (so far mostly failing) to make software ecosystems commodities so that renting hardware from them to run the software can extract higher rents. Things like open sourcing Tensorflow & Keras, then turning around and selling a bunch of ML optimized GCP products - commoditize the specialist ML software labor, which is the complement to the cloud vendors' offerings.

Software engineers really have to watch out for these situations and walk away from places trying to treat them as a commodity.


The cloud providers have no desire for higher rents. Andy Jassy the CEO of AWS just said in his keynote that only 4% of enterprise workloads are on any cloud provider.

The cloud providers want market share. Right now, they are competing against non-consumption. AWS has never raised prices on any of their services. In fact, they are aggressively trying to reduce costs by creating their own processors purpose-built for their needs and passing the savings on to customers. I would think the same is true for Azure. Amazon and Microsoft both know that the minute they raise prices, it's going to scare companies away.

Who knows what the heck Google is doing with GCP.


They're also running up against significant competition in the form of extremely inexpensive local hardware.

Ten or twenty years ago a company with a thousand employees typically needed multiple racks full of servers, in some cases to handle the load but in many cases just because each separate service would have its own physical machine.

Today all of that can fit on a pair of local physical machines hosting virtual guests. A single machine can have over a hundred cores and terabytes of memory. Moreover, a physical machine can be amortized over ten years provided you have a load that doesn't vary significantly over time. And because it's such a small number of physical machines, you no longer need exotic local power and cooling solutions.

Cloud providers are already more expensive than this in many cases. They need every cost advantage they can get just to be in the game.
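A rough sketch of that comparison, with purely illustrative numbers (not real prices from any vendor):

    # Back-of-the-envelope: amortized on-prem host vs. an always-on cloud VM.
    # Every number here is made up for illustration only.
    server_cost     = 40_000          # one large 2-socket box, USD
    amortize_years  = 10
    power_cooling   = 3_000           # per year
    onprem_per_year = server_cost / amortize_years + power_cooling   # 7,000/yr

    cloud_hourly    = 3.00            # a comparable always-on instance, USD/hour
    cloud_per_year  = cloud_hourly * 24 * 365                        # ~26,280/yr

    print(cloud_per_year / onprem_per_year)   # ~3.8x, before people costs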


Hardware is cheap. People are expensive. Besides that, procuring resources with your cloud provider is simply a matter of writing a yaml file. Not to mention the lack of upfront investment, and only paying for the resources you need instead of spending money on hardware you have to size for peak load. You would be amazed at the amount of resources you can buy for the cost of the fully allocated salary of one engineer.
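To illustrate the point at the API level, sketched with boto3 (which another comment in this thread mentions) rather than a yaml template; the region, AMI id and instance type are placeholders, not recommendations:

    import boto3

    # Minimal sketch: launch one small VM programmatically.
    ec2 = boto3.client("ec2", region_name="us-east-1")      # placeholder region
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",                    # placeholder AMI id
        InstanceType="t3.micro",                            # placeholder size
        MinCount=1,
        MaxCount=1,
    )
    print(resp["Instances"][0]["InstanceId"])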

And yes you can buy hardware. But can you run a data center in multiple regions? Besides that any cloud provider offers more than just a bunch of VMs. AWS alone has 260 services with an entire team of people keeping them patched and optimized. I don’t keep up with Azure as carefully. But this isn’t meant to be an Azure vs AWS comment. I just don’t know Azure.


> Hardware is cheap. People are expensive.

Except that you still need the people, because most of the labor isn't putting the hardware in the rack, it's managing the software which you have to do regardless of where the hardware is.

> Besides that, procuring resources with your cloud provider is simply a matter of writing a yaml file.

That is no different than it is locally.

> Not to mention the lack of an upfront investments and only paying for your resources you need instead of having hardware that you are spending money on because you have to have enough hardware to handle peak load.

But hardware is cheap, remember? And most companies don't actually have large load variations.

> But can you run a data center in multiple regions?

Obviously yes. Any company of non-trivial size would have multiple sites and could locate a host at more than one. This doesn't even necessarily raise the price, because you already need enough machines to provide redundancy, so locating them at different sites doesn't even require additional hardware, only locating some of the existing hardware at other sites.

This is also mostly overrated for companies smaller than that, because cloud providers have had company-wide outages at a frequency not all that much higher than site-wide outages for sites that have a reasonable level of redundancy.

> Besides that any cloud provider offers more than just a bunch of VMs. AWS alone has 260 services with an entire team of people keeping them patched and optimized.

This is only relevant if you're using 260 different services and not just a bunch of VMs, and plenty of companies are using just a bunch of VMs.


As a manager of a team of application product developers I can tell you, the headcount cost of ops teams & the time cost of taking people whose job shouldn’t involve vm provisioning overhead but nonetheless does are both huge compared to cloud services. In cloud tooling, my team of people all with zero experience doing vm provisioning can get production systems up, add logging, add alerting, add networking, etc., all very easily or with just low touch overhead from teams that manage best practices or compliance. Creating the same internal developer tool experience with data centers is SO expensive and requires a major headcount investment.


It's always shocking to see how inefficiently some companies are operated.

The things you're describing should take a single individual a matter of seconds for a system which is already in operation, and a one-time cost of a few hours to set up at the outset (i.e. once or twice a decade). If it takes significantly longer to do locally than it takes to input into the cloud provider's interface, something's not right.

I can tell you where most companies go wrong here though. It's in excessive specialization. If you put separate people in charge of provisioning, networking, logging, etc. then you create a ton of friction to do anything because you need five different humans to touch it and they all have to coordinate. One person can do all of those things, as you've learned when one person does it interacting with the cloud providers. And when one person is doing all of it, it takes only seconds to do.


I listed all of the services we used across five environments and most across three availability zones. So one person was going to manage what on prem would be roughly 200 VMs/services and make sure they stay patched, the OS stays updated? Locally, a lot of those services would run in a cluster for availability.

There is no company on earth that has one person managing an on prem implementation of that level of complexity.

One person does it in a cloud environment because they aren’t managing hardware, patching, etc.


> So one person was going to manage what on prem would be roughly 200 VMs/services and make sure they stay patched, the OS stays updated?

You're talking about the applications, i.e. the guests, not the hosts. It should be completely reasonable to expect a single person to be able to manage the hosts, including the networking for the hosts, the backups of the guest images by the hosts, etc. The services themselves may be arbitrarily complicated and require a large number of people to configure and manage, but that's the same as it is with cloud providers.

There is also a simple way to handle guest OS updates in most cases. Turn on automatic updates, and keep automated VM snapshots in case one goes bad.


Many devs don't want to do any of that; they would rather have everything run in Docker and not worry about any of the above (which, if you can afford it at scale, can make sense; personally I'm not of that ethos and try to stay away from companies like that, as it's great for me personally to work in a place where the devs can jump in anywhere rather than just be tied down in their silo).

I think it's easier for some companies to just pay Amazon for all these services than to hire engineers able to do what you are talking about. For a certain type of company (well established, known resource loads) it can make sense now (though they risk huge lock-in costs and won't be able to move fast enough when those arrive), but I've seen/worked at many startups that try to do the full cloud setups above and just get blown out from costs alone.

The last company I worked at (a Southeast Asia startup, with just me originally as the dev, though it grew to about 5 more devs when I left) we only had s3, cloudfront (though we put cdn77 in front of it cause bandwidth in SEA is a rip off with aws), and rds with postgres (and I wanted to move away from this cause backups and resizes from sql dumps are a PITA and take way too long when you have to do it over the network vs on the instance) and ran every other service we needed on ec2 instances (load balancer/auto scaling with nginx+3rd party modules + python daemon with boto3 api, rabbitmq, solr, memcached, dnscache, pgbouncer, letsencrypt, deployments to prod/staging/one-off envs for random stuff with fabric building what was needed on a workbench then spinning up instances from an AMI image [1/4 the cost compared to using docker, with init scripts that turned applications off/on depending on what the server was needed for] to about 40+ machines at max site load) and had to support 40+ customer domains, which overall was 8-10x cheaper than the "all-in" AWS abomination an AWS consultant cooked up before we went that route.


No, with the cloud providers, they manage the guests. You don't have to know anything about how to manage the software. The difference between "turning on automated updates" and what your cloud provider does is that they manage the updates and have an entire team to test them.

Most of the services I named are just that services - you don’t even think about the underlying process.

It literally takes two clicks for instance to stand up a massively parallel data warehousing solution on AWS or an Hadoop cluster. It’s a few lines of yaml to set up most of it.

It didn’t take a “large number of people” to manage the software. It took two full time people - one in the US and one in India. For the most part, there is very little to manage or configure. Everything is provisioned with a set of yaml files.

Also, what happens when that one person gets hit by the lottery bus or goes on vacation?


For context. My first exposure to the cloud was at my last company of 100 employees. We aggregated publicly available (ie no PII) health care provider data from all 50 states and government agencies as well as various disease/health dictionaries and we combined it with data sent to us from large health systems.

These are the services we used.

Infrastructure

- Route 53 (DNS)

- SQS/SNS (messaging)

- Active Directory.

- Cognito (SAML/SSO for our customers)

- Parameter Store/DynamoDB (configuration)

- CloudWatch (logging, monitoring, alerts, scheduling)

- Step functions (orchestration)

- Kinesis (stream processing). We were just introducing this when I left. I’m not sure what they were using it for.

CI/CD

We used GitHub for source control.

- CodePipeline (CI/CD orchestration)

- CodeBuild (Serverless builds. It would spin up a Windows or Linux Docker container and basically run PowerShell or Bash commands)

- self hosted OctopusDeploy server.

Data Storage

- S3 (Object/File storage)

- Redshift (OLAP database)

- Aurora/MySQL (OLTP RDBMS). When we had large indexing to do to ElasticSearch, read replicas would autoscale.

- ElasticSearch

- Redis

Data Processing

- Athena (Serverless Apache Presto processing against S3)

- Glue (Serverless PySpark environment)

Compute

- EC2 (Pet VMs and one autoscaling group of VMs to process data as it came in from clients. It ran a legacy Windows process)

- ECS/Fargate (Serverless Docker cluster)

- Lambda (for processes where we needed to scale from 0 to $alot for backend processes)

- Workspaces (Windows VMs hosted in the US as Dev machines for our Indian Developers who didn’t want to deal with the latency.)

- Level 7 load balancers

Front end

- S3 (hosted static assets like html, JS, CSS. You can serve S3 content as a website.)

- CloudFront (CDN)

- WAF (Web Application Firewall)

All of the above infrastructure was duplicated for five different environments (DEV, QAT, UAT, Stage, Prod). In Prod, where needed, infrastructure was duplicated in multiple availability zones (not regions).

Where applicable, backups were automated.

We had two full time operations people. The rest was maintained by developers. As for the rest:

> [Procuring resources] is no different than it is locally.

I can go from no infrastructure to everything I just named in a matter of hours locally? I can set up a multi-availability-zone MySQL database with automated backups just by running a yaml file locally, and then turn it off when not needed?


Most of what you're listing are Layer 7 services. The time cost there is in the configuration. You can put Active Directory in the cloud, but it's still going to be Active Directory, i.e. a massively complicated proprietary framework that touches every Windows system in your network like an octopus.

And some of those things actually make sense. You can't really locally host a CDN, can you? If you need a big amount of compute for an hour and then never again, it doesn't make much sense to buy hardware for that.

But the point isn't that it never makes sense to put anything in the cloud at all. It's that companies regularly overuse it as some kind of buzzword panacea when there are only a specific set of things that it's actually good for.


It’s not just “configuration”. There is also the issue of continuous monitoring and upkeep. Not to mention someone has to worry about servers going down, hard drives going bad, backups. Would any one person know how to configure and maintain everything above?

I’m a developer who happens to have AWS in my toolbelt. I could set all that up by myself. In the the two years that I worked there, we never had an issue with any of it.

How much in house expertise would we have had to hire to manage everything that we used?


> There is also the issue of continuous monitoring and upkeep. Not to mention someone has to worry about servers going down, hard drives going bad, backups.

Monitoring and backups you configure once and then they're automated. Disk failures happen maybe a couple times a year and take five minutes to stick in another disk. None of this is particularly labor intensive.

> Would any one person know how to configure and maintain everything above?

It has long been a fact of life in companies small enough to have a single-person IT department.


Do disk failures only happen once or twice a year when you're running infrastructure of the size I mentioned on prem, across multiple geographically dispersed locations? How many people are you going to have to hire to make sure it's running? Patched? Will one person know how to optimize the 25 services we used? All of those services are redundant across multiple servers.

As far as the backups, how long would it take you to recover from a disaster? Do you have a team of people who are experts on all 25 different services?

It was a pain when I did my first and only on prem from scratch infrastructure that consisted of multiple environments just running Consul, Fabio, Nomad, Vault, Mongo, Memcached and Sql Server. We also had on prem builds and deployments using agents orchestrated by Visual Studio Team Services (now called Azure Devops, basically TFS online.)

The prod environment was running clustered for all of the services. I left off my original list Amazon Certificate Manager that automatically provisions and renews SSL certificates. The IT department had to keep up with SSL certificates.

Our on prem infrastructure was a pain to manage and didn’t have anywhere near the reliability. I know we didn’t keep everything up to date and patched.

Backups were a pain to manage and no one ever verified if they worked.


I don’t think it’s mutually exclusive. They might be commoditizing a complement to get market share or find areas of vendor lock-in to raise prices later. I.e. they could be seeding commodity tools that their customers can plug in, while also generally lowering prices for a more pressing immediate land grab before raising prices later.


The cloud has been a viable alternative for about a decade and still has only about 4% of enterprise spend combined. By the time the market is saturated and the only path to increased profitability is raising prices, most of us will be retired. Is that really the most pressing risk that most business have? In any of our lifetimes, when has the cost of technology ever increased- besides phones?


Interesting! So the more technologies, the more pay, as it will be harder to find specialists. But while it takes years to get good at something, say a CRUD programming language, that experience will be useful for other CRUD languages. Managers do not understand that, though, so it can be very hard for a specialist to find a matching job when the combination of platforms x languages x frameworks x transpilers gets too big. Another interesting phenomenon is how quickly such specialists become obsolete...


An aside, but I was so confused by the "dot slash" link ("./") until I realized he means "/.", Slashdot.

EDIT: Edited because people got upset.


Wow, I deeply hope no one’s calling that a nazi salute. They don’t get to own that.


I think that's what it's called, but I may be wrong.


I've never heard of it before and I think it's a bad name, so I'm not going to use it.


I didn't say you should use it? I don't understand the reaction.


> I think that's what it's called, but I may be wrong.

You're wrong: that's not what it's called.


Nobody thinks that.



