Millions of (poorly coded) bots relentlessly crawl the web to detect and spew junk content into any form they find. The go-to countermeasure is to force everyone to complete a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). CAPTCHAs are those annoying user-hostile tests where you type in skewed letters or identify objects in photos. They require cultural familiarity, introduce accessibility barriers, and waste everyone’s time. Instead of using a CAPTCHA, you can detect and block many bot submissions using completely unobtrusive form validation methods.
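To make that concrete, one widely used unobtrusive technique is a honeypot: an extra input hidden from humans with CSS, which naive bots (filling in every field they find) will populate anyway, often paired with a minimum time-to-submit check. A minimal sketch, where the `website` field name and the two-second floor are illustrative choices rather than any standard:

```python
def is_probable_bot(form_data, submitted_at, rendered_at):
    """Heuristic bot check: honeypot field plus a timing floor.

    `website` is a hypothetical decoy field hidden via CSS; real
    visitors never see it, so any value in it suggests automation.
    A form completed in under two seconds is also suspicious.
    """
    if form_data.get("website"):          # honeypot was filled in
        return True
    if submitted_at - rendered_at < 2.0:  # submitted implausibly fast
        return True
    return False
```

Neither signal is foolproof on its own, but together they silently reject a large share of low-effort spam without asking humans to do anything.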
Obligatory disclaimer: this post is my opinion, and my opinion only. It does not represent the opinion of my current, past, or future employers. Indeed, as is always the case for my blog, this post is not signed off by anyone in my reporting chain. While there are some references to my work of the past decade, all of the information I’m talking about is open knowledge.
A few weeks ago I had a heated discussion on Twitter about User-Agent strings and browser fingerprinting, topics I have been following, and taking part in, for over ten years now. Unfortunately, the discussion is still at the same level it was at a number of years ago, and I think this is to the detriment of users.
Let’s start with what fingerprinting is and isn’t. The name “fingerprint” suggests that you can uniquely identify a specific browser among the total population, but that is not what fingerprinting tends to be used for, among other reasons because it’s impossible to observe the total population. Instead, fingerprinting can be used to build connections between actions at different points in time, when a stronger pseudo-unique identifier is not available.
I’m going to explain this with a metaphor, because I think most people reading this are familiar with TV crime dramas and police procedurals. Say that your investigators are looking for a car that was present at the scene of a crime. They don’t have a license plate, VIN, or other uniquely identifying information, but they may have the make and model, the colour of the paint, and maybe the description of an eyewitness who didn’t think to read the plate but noticed something about it.
These details are not enough to identify a car out of the whole population of cars, in general: unless it’s a very specific classic car of which only one is known to be painted in that particular colour, they will still have way too many cars that could possibly be the one they’re looking for. But if they also have a way to limit the population further, that can be a lot more useful. Say that the scene is behind a gate, so instead of looking for any car of that make, model, and colour, they’re looking for a car of that make, model, and colour, owned by someone who has a key to the gate. Now we’re talking!
Various characteristics that can be observed about a web browser metaphorically match the characteristics of a car, which is what EFF’s Panopticlick (now replaced by their Cover Your Tracks application) was trying to show people. Unfortunately there’s a significant difference between how Panopticlick could estimate the likelihood of a certain browser configuration and how the same would work for a car: at least for the most prominent characteristics, law enforcement agencies have databases, so they can tell you exactly how many other cars with a particular trait exist out there.
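Panopticlick’s approach can be sketched in a few lines: a trait shared by a fraction p of the observed population contributes -log2(p) bits of identifying information, and under an independence assumption those bits add up. The trait frequencies below are made up for illustration:

```python
import math

def surprisal_bits(fraction_sharing_trait):
    """Bits of identifying information carried by a trait shared by
    the given fraction of the observed population: -log2(p)."""
    return -math.log2(fraction_sharing_trait)

# Illustrative, made-up frequencies. Each trait narrows the candidate
# pool; assuming independence, the bits simply sum.
traits = {
    "user_agent": 0.01,  # 1 in 100 visitors share this exact string
    "timezone":   0.05,
    "screen":     0.02,
}
total_bits = sum(surprisal_bits(p) for p in traits.values())
# 2**total_bits approximates how large a population you can single
# one browser out of, given these traits.
```

With these toy numbers the three traits together carry about 16.6 bits, enough to distinguish one browser in roughly 100,000, which is exactly why adding rare traits to the User-Agent matters.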
The car analogy fits another point about browser traits: while some of the traits can be changed to confuse your tracks, it’s quite possible that an attempt at disguise will make your browser stand out a lot more. I hope my old example is out of date, but it used to be that Firefox did not accept WebP images, which meant that if you received a request claiming to be from a Firefox browser but including WebP in the list of accepted image formats, you knew it was a fake User-Agent string. That would be like taking a Tesla and putting a Fiat Panda badge on it: nobody would believe it, and they will likely remember seeing a very ironically modified car.
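That kind of mismatch is cheap to check for on the server side. A sketch of the idea (Firefox gained WebP support around version 65, so the cutoff below is illustrative of the era the example describes):

```python
import re

def looks_like_faked_firefox(user_agent, accept_header):
    """Consistency check between a claimed browser and its advertised
    capabilities: a request claiming a pre-WebP Firefox while
    advertising image/webp is almost certainly wearing a fake badge."""
    match = re.search(r"Firefox/(\d+)", user_agent)
    if match and int(match.group(1)) < 65 and "image/webp" in accept_header:
        return True
    return False
```

The same pattern generalizes: any capability that a given browser version could not have had becomes a tell against a disguised User-Agent string.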
But how did this whole topic come up again? Well, turns out that at least some Linux distributions are still injecting their name and version in the User-Agent string of the browsers they package (including libraries used as reusable components), and at least some of their developers don’t see the harm this can cause their users. That suggests that more explanations are required.
Adding the distro name and version to the User-Agent is, in the car metaphor, the equivalent of a sticker from the dealership that sold the car. This turns out to be a fairly common thing to do as well! It wouldn’t be a very specific trait, unless it were a dealership that sold very few of the cars under consideration, for instance because it’s from a different region, or because it went bust a few years before. And any Linux distribution would be something odd on a general-audience website.
So what is the argument for adding this sticker? At least for the developer who kept insisting that this is not harmful to the users, the reason to have the name of the distribution (the “sticker”) is to show services that they should support the distro because they have a number of users using it. It’s an argument that I can sympathise with, but I don’t think it is reasonable.
First of all, we’re talking about an opt-out feature: all of the requests being sent are effectively “branded” unless the user knows to turn this off. I don’t think that’s fair, because the vast majority of users are not developers, and wouldn’t know how User-Agent strings work. Unlike the dealership sticker, which a car owner would notice and could decide to take off if they didn’t care for it, User-Agent strings are not shown to the user in day-to-day browsing, and users are likely unaware of how much information they are providing unless they stumble across EFF’s Cover Your Tracks. This kind of opt-out feature raises absolute and righteous outrage when rolled out by the likes of Microsoft or Google, so I find it hypocritical to think it’s fair game to do the same for a Linux distribution.
The second point is that doing this to show the services that a specific Linux distribution is worth supporting feels myopic, for multiple reasons. Let’s start from the most obvious one: most analytics platforms do not care to give a breakdown of visits per distribution. AWStats did and probably still does, but Google Analytics definitely doesn’t. WordPress’s basic stats don’t even bother giving you a breakdown by operating system, let alone distribution. Most of the self-hosted analytics software doesn’t seem to care about it either. Since I don’t work on front-facing services, I don’t even know what the analytics software used in Big Tech would show, but I can take an informed guess that nobody would be digging into how many users are using Fedora versus Ubuntu versus Arch Linux, unless they specifically focus on Linux in the first place. And then they may as well just ask: I can’t remember the last time I met someone who uses Linux and wouldn’t talk about their favourite distro. I am told that MediaWiki might have a per-distribution breakdown, which makes sense particularly as many distributions use it for their own wiki, but… yeah, I don’t think it represents a majority of users.
For the record, according to Google Analytics, only just over one thousand visitors to this blog in the past five months used Linux, compared with nearly three thousand using Windows, and two thousand each using iOS and Android. And this is a fairly biased sample, since this blog is about tech and Free Software in the first place, and in this period Hacker News featured two of my posts on their front page.
What I’m trying to say is that for most big, global services, the question is unlikely to be “Should we support Fedora?” but rather “Should we support Linux?” in the first place. Which is, by itself, the right question to ask in my opinion. Distributions being different for difference’s sake has been a plague for Linux as a whole, and when it comes to web services, the fact that browsers provide a well-standardized platform is a great upside. The only providers that would have to care about the differences between Gentoo and Arch Linux are those providing services outside of web browsers, and I would venture a guess that they wouldn’t want to rely on web statistics, as you may use a different browsing device than the system you use the service from (take for example Tailscale.)
That’s what I call a marginal upside for the project that applied the sticker: the vast majority of operators won’t even notice, and even those who will, might not care about the distribution as much as they care about the browser and its version. On the other hand, the “sticker” applies to an even smaller subset of an already small population, which makes identification of a single user interaction a lot easier.
Now, those who know me know that I have a nuanced view of privacy, as I expressed before. This means I don’t generally feel like I need to hide myself from big organizations and law enforcement, but that does not mean the same applies to everyone! Particularly with the way the world is going, not everyone is playing on the lowest difficulty level, so I think it is important for people to make informed decisions — and for Free Software developers to make decisions that are kind to users. Which is why I wouldn’t have a problem if this branding was opt-in, and well explained at first installation, just like Windows does.
Requiring users to opt into lowered privacy, even if lowered by a negligible amount, is a difficult path to take, I grant you. I have said so myself: being invisible to analytics platforms means your preferences are going to be ignored. Home Assistant makes a good case for why you should opt into the diagnostics statistics, as it helps them prioritize integrations that have the most users, but I don’t know how many people actually opt into it right now. And Home Assistant is doing it “right” in my opinion, by making the anonymized statistics available to everyone, while a branded User-Agent would spread the statistics across many services, most of which will not be available to either the public or the brand owners!
Opt-in analytics are harder, also because they are largely transactional. A lot of time has passed since the advent of Clubcard, but most stores still build their analytics on loyalty cards and signed-in discounts. Entire businesses exist to analyse spending across stores in exchange for single-digit-percentage cashback. We don’t quite have pay-to-surf options anymore, but plenty of stores, financial institutions, and others (even Microsoft!) are happy to provide you with enticing discounts on online shopping if you install their extension that provides anonymized insights into your online behaviour. Linux distributions rarely have any opportunity to offer this, but that does not exempt them from the same expectation of privacy that users get from other operators.
Finally, a common refrain for this is “But what about $vendor?” Whataboutism is a common problem in many fields, and Free Software is not immune to this. I personally would want to consider Free Software projects as more ethical than other vendors, but since I already said that “Kind Software” has only a partial overlap, I shouldn’t be surprised if instead of doing the best thing for the user, some projects would rather take the most value they can get away with.
Since I started looking at User-Agent strings and browser fingerprinting in general, we have had a significant number of wins for user privacy, as well as a number of regressions. Mozilla successfully reduced the variance of their User-Agent by freezing the Gecko trail, while both Apple and Google attempted freezing the whole User-Agent, with mixed results and a lot of conspiracy theories being thrown around because of it. Personally, I have at least successfully argued against providing the Android ROM version in the User-Agent string of Chrome for Android, which I’m very proud of: given how this version string varied across different providers even for the same model, it was a significant amount of entropy injected into the string!
User-Agent is, quite honestly, a legacy string by now. Chrome and Edge have been pushing for the usage of Client Hints to provide more details about the client platform in use, and that is even more of a fingerprinting issue, even though it does require active participation from the browser, rather than acting as a passive source. The fact that Cover Your Tracks does not seem to attempt showing those hints made me a bit sad. But being a passive source of information is a double-edged sword, in particular it means that you can (possibly) look back and tie together sessions based on this string alone, even if not with the highest of confidence.
I’m not coming here with oven-ready solutions (that would anyway be thrown into a microwave), but rather with food for thought, and with the idea that we should be more considerate towards our users. People who are at risk should not have to learn which combination of common traits does not stand out, and should not have to be told “Actually, just use a non-Free platform to hide in the crowd.” But these are not new topics; I wrote before about how little the community as a whole appears to care about the hard yet impactful problems.
Disclaimer: this post is not financial advice. I’m going to be talking very vaguely about my personal experience when it comes to lotteries, to give context on why I’m particularly grumpy about certain attitudes in the world, in my bubble, and even in Free Software.
For what I’m about to talk about to make even a sliver of sense, I will have to give some personal context, which is not something I’m very comfortable doing in the current day and age; quite honestly, it gets close to feeling like hiding behind an excuse for a number of past mistakes, or just for the way I am now. Take it with the appropriate amount of salt.
I grew up in Italy, and while I can’t say that I grew up poor (there are a lot more people in much worse situations), I also cannot say I was raised in an affluent household. My father was a blue-collar worker in the local chemical industry, which had been downsizing for as long as I can remember, to the point that for a number of years the only income we had as a family was his unemployment benefits and his disability pension. Things were, at times, tough, but at the very least we always had a roof over our heads, since the house is my mother’s.
As it happens, like most people who have access to some cash, but generally not quite enough, we have seen our fair share of “get rich quick” schemes trying to make us (or more often, my parents) into victims. Some actually managed to. Probably the biggest one has been the lotto, which is (or was) a State-sponsored lottery that caused my parents to waste quite a lot of money over the years, and nearly dragged me into the same spiral. That’s the topic of this post.
Note that there were, and are, multiple State-sponsored lotteries and “games” in Italy. Some are more chance-based than others, but for many, many years the whole concept of betting was considered taboo. That itself feels a bit strange, but I’m not an expert on the topic.
For most of the follow-up discussion to make sense, you need a vague understanding of how the Italian lotto works. I’m sure there’s plenty of texts, books, Wikipedia pages, and so on that can explain the game, rules, and history of it better than me, but it’s only the superficial view that is important in this context. Lotto is similar to the game of bingo: every so often, five numbers between 1 and 90 inclusive are drawn at random — the players’ objective is to pick a set of numbers beforehand, and hope that one or more of them are present in the drawn selection.
The game is old, and carries some vague historical legacies, despite changing often. I first remember it when the chosen numbers were written on pieces of paper coming from some central government body, and stamped with a rubber stamp by the owners of the stores that provided the play area. And the drawing only happened once a week, on Saturday. Growing up I saw the system being replaced by a digital, computerized, and even networked system managed by a company called Lottomatica, which by the design of their network turned out to be the perfect provider to pay bills, fines (parking and speeding tickets), and prepaid card top-ups over the years. I believe they are still effectively in use today for those use cases.
The frequency of draws and the number of ways to play this game also increased over the years, which to me spells that whoever is managing it clearly understands addictive gambling behaviour. As I said, growing up I remember it being drawn only on Saturday; then they introduced an extra draw on Wednesday, and eventually one more on Tuesday. I don’t know if they have added more since. In my earliest recollection the drawings were also done in ten different cities across Italy (the “wheels”), so instead of just five numbers, you would get fifty numbers every drawing — this was not always the case, as cities were added and removed over the game’s hundred-plus years of history, and I think they introduced an 11th “wheel” a few years back as well.
To make things even more complicated (and thus, cynically, to extract more money out of fools like my parents), they allowed selecting multiple combinations of numbers — so you could play three numbers and win on any one, two, or all three of them coming up, but you could also play six numbers and win on any of those combinations as well. How much you would win, though, would decrease significantly with the number of possible winning combinations, and the wheel selection would also linearly decrease the winnings, as the default “play” would split your bet equally across however many “wheels” you selected.
Now, if you think about it rationally, short of fraud (which admittedly, in Italy, is not something you can dismiss so easily), these drawings are completely independent of each other. If you were to always play the same exact numbers for a long time, eventually they would hit a combination — but the return on the bet is unlikely to be worth it. A mathematician would be better suited than me to look into that.
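The odds are easy to make concrete: with five numbers drawn from ninety, the number of your picks that match follows a hypergeometric distribution. A sketch (single wheel only; payout multipliers vary by bet type and are left out):

```python
from math import comb

def match_probability(picked, matched, drawn=5, pool=90):
    """Probability that exactly `matched` of your `picked` numbers
    appear among `drawn` numbers taken at random from `pool`
    (a hypergeometric distribution, for a single wheel)."""
    return (comb(drawn, matched)
            * comb(pool - drawn, picked - matched)
            / comb(pool, picked))

# A single number has a 5/90 (about 5.6%) chance of being drawn on
# one wheel; an "ambo" (both of two picked numbers drawn) is roughly
# 1 in 400.
p_single = match_probability(picked=1, matched=1)
p_ambo = match_probability(picked=2, matched=2)
```

Comparing these probabilities against the advertised payout multipliers is all it takes to see that the expected return is well below the stake.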
But people don’t think about this rationally: books are published that suggest which numbers to play based on the dreams you had, late-night shows advertise premium call numbers that can suggest the right numbers to you, and, most importantly for what I’m talking about here, “newspapers” are published trying to rationalize the frequency and likelihood of a certain number being drawn on a certain wheel.
Back then, my parents were quite heavily invested in lotto, and were buying those no-news “newspapers”. My mother started collecting the historical printouts of the drawings since the game started, trying to find some pattern in the numbers to figure out what would come out — and playing dozens, if not hundreds, of combinations every week, then twice a week.
As I said, I ended up getting involved in this — with hindsight, I was just happy to spend time with my parents on something they seemed to care about, but I can definitely see that the whole enthusiasm was also stoked by the way the State-owned national TV shone more light on the additional drawings, including the additional lucky draw of non-winning tickets, which I stamped (with our address) by the hundreds for sure.
Now, among the things that got me involved in all of this was the fact that some of those newspapers started including listings of BASIC and QBASIC programs that purported to analyze and predict the numbers that would be drawn in the future. They were obviously nothing real, but that didn’t stop them from being interesting to a young, inexperienced boy who wanted to learn proper programming, so I spent a significant amount of my time back then either trying to adapt the GW-BASIC listings to work on a Commodore 64, or combining the multiple QBASIC programs into a single unified interface.
To be honest, I could have learned a lot more about data structures and integration if they had actually bothered explaining the design instead of just giving the listing — but that would probably have shown just how pointless the whole pseudo-analysis was.
Personally, I have to assume that the reason it took me so long to realize how much of a waste of time and money this was, was that I didn’t have much of a concept of money in my head at that point, and that I did manage to actually win some money after a dream, which obviously convinced 13-year-old me that there had to be something to this whole dream-meaning business (obviously, there isn’t.)
Why am I digging up this past, and painting myself and my parents as fools? Well, there are two main reasons. The first is that I am still angry about how much of a disservice the Italian State had done to its own people at that point! Feeding a gambling addiction by increasing the number of chances people can take, running high-budget shows on national TV, and not regulating at all a space that crawled with scammers and fraudsters claiming to be able to predict the next numbers drawn. Ironically, nowadays I also wonder why at that point it was perfectly acceptable for people to throw money into the lottery, while most sports betting was banned: at least the latter has an element of analysis based on the ability of the sports players involved.
The second reason is that current affairs are not particularly far from that situation. The “Crypto Crash” is leaving enough people with no savings, and the “fools” who listened to advice to move everything into this or that currency are the ones taking the blame. The regulated markets are also taking a nosedive, particularly when it comes to stocks that were seen as a way to make lots of money quickly, rather than having a steady track record of maintaining value over time. And this upsets me because I know that many of those people are in the same situation my family used to be in — having enough savings that it’s worth investing, but not enough that burning a huge chunk of them on a bad market deal wouldn’t hurt.
And just like with lotteries, the people who keep trying to tell you they can make you rich are the ones taking the money home. Not with the lottery, not with cryptocurrencies, but with everything that goes around them: it used to be newspapers and books, which were cheap enough to churn out and had plenty of suckers to pay for them, and now it’s courses, initial offerings, and the general “Give me money and I’ll make you all rich” approach.
There’s even more similarity if you look around for it. Objective technical innovation could be found even in an environment that was, and is, unhealthily playing to addiction and fraught with fraud: the people working on Lottomatica in Italy built a great system – from the technical point of view – that allowed instantaneous recording of a “play” through a tamper-proof receipt, that reduced the risk of fraud from those running the shops that allowed you to place your bet, and that was flexible enough to become a de-facto payment terminal for so many other governmental and non-governmental uses, similar to Japan’s konbini-based system. And at the same time, the presence of so many outlets giving advice, fake or real as it was, provided employment for likely hundreds of people, technical and not.
But this was, and still is, exploiting the gambling addiction of people. It doesn’t matter how often the ads may say “When the fun stops, stop” — yes, you can technically gamble or “play” these games just to enjoy yourself, but that does not make them any less dangerous to those who have the most to lose. And funnily enough, I don’t believe that “democratizing” these makes for a better world — indeed, I am in favour of raising the barrier to entry, in the way that casinos in Italy used to enforce a dress code, or various regulators require tests to be passed before one can buy into complex markets.
I know that this is a futile attempt, but I really wish to hope that at least in the world of Free Software we will stop promoting harmful technologies, and realize that just because you and your peers can distinguish a scam from something real, it doesn’t mean that those who don’t are fair game to take advantage of.
I normally don’t write about cryptocurrency in Bits about Money. It gets far too many column inches relative to its actual importance in the world, which is minimal compared to other financial infrastructure. I prefer writing about that extremely undercovered topic. But, much like Matt Levine feels professionally obligated to keep up with Elon Musk drama, I can’t avoid writing about stablecoins after the May 2022 collapse of Terra (UST).
As always, Bits about Money is my own opinion. My employer Stripe, which frequently has differences of opinion with me regarding cryptocurrency, has recently announced a product which uses a particular stablecoin (USDC). I have made de minimis usage of USDC personally partially out of technical interest and partially to use a prediction market. I find prediction markets intellectually interesting and have been an occasional user of them for almost 20 years. They have poker's thrill of intellectual ritualized combat and take advantage of my absolute incapacity to resist any textarea that could carry the message Someone Is Wrong On The Internet.
Stablecoins in a nutshell
A stablecoin is designed to be a deposit (i.e. money) recorded in a slow database which is negotiable among other actors who use the same slow database. This compares to deposits at e.g. banks, which are typically recorded in a faster database and are negotiable either at the bank or, through the banking system, at other users of money.
Stablecoins are typically contrasted with other cryptocurrencies, such as Bitcoin, because their unit of account is linked to a government-issued currency (overwhelmingly, the U.S. dollar) rather than more speculative assets one could record in a ledger maintained in a slow database.
Stablecoins are big business relative to most startups and minuscule relative to the money supply. There are currently a bit more than $150 billion issued, which (if they were consolidated) would be in the same weight class as a large regional bank like e.g. Fifth Third.
Uses of stablecoins
In principle, you could use stablecoins as money, like how you use deposits as money. Stablecoins are not used like money; rather than facilitating almost the entire diversity of transactions in the economy, they are overwhelmingly used for a few niche use cases.
The cryptocurrency community often explains that the core use for stablecoins is for moving money between cryptocurrency exchanges to assist with arbitrage. This is not the dominant use of stablecoins. The dominant use is actually collateralizing investments in popular products with embedded leverage, such as Binance’s USDT/BTC perpetual futures contract. Perpetual futures are themselves a fun rabbit hole, which might have to wait for an essay of their own. Exchanges like them because they allow fat largely sub-rosa fees; institutions like them because they’re extremely capital-efficient; retail likes them because they allow high amounts of margin (which gamblers perceive as amping up their fun).
An emerging use case for stablecoins, which is not yet dominant, is that they’re programmable money that can be easily operated on by smart contracts in “decentralized finance” (DeFi). DeFi is something of a term of art; we’ll come back to how decentralized some popular offerings actually are in a minute. DeFi’s current raison d'etre is borrowing/lending cryptocurrency to allow increased leverage by traders, if one is charitable, or creating financial games which are Ponzi-adjacent (memorably described once as “boxes”) if one is less charitable.
In principle, stablecoins could be used to settle transactions. In principle, two crypto users could pull out their phones, share a QR code (to show the receiving wallet address), and send stablecoins over without their wallet providers needing to further coordinate. In practice, this is uncommon, because it is a higher-friction more-expensive slower Cash App which does not yet have the society-wide network effects of competing ways to settle transactions.
But the principle is interesting! It’s virtually impossible to find a tech investment fund which thinks that bank wires, for example, have the appropriate amount of friction and ceremony associated with them, and some tech investment funds use stablecoins to settle investments. Depending on the slow database used, this costs about the same as a bank wire (negligible relative to the investment) or less, is much faster, requires no coordination with a banker during bank hours, and may be substantially less likely to be disallowed if e.g. one of the counterparties is in another nation.
And slow expensive Cash App is a Cash App one can download without e.g. having sufficient legibility to the banking system to use fast cheap Cash App. Plausibly that is at least an intellectually interesting point in the multidimensional vector space of all possible Cash Apps.
You’re probably not here for a deep dive on slow database technology or fast database technology, and even if you were I'm not here to write it. Fast databases are complicated, impressive technical artifacts.
A more interesting question is how privately issued money gets and maintains parity with publicly issued money. There are a few different mechanisms for this.
Money market fund style stablecoins
A money market fund is a specialized form of investment vehicle designed to have the desirable characteristics of a deposit (liquidity on demand and virtually riskless) while having more yield than deposits do. This is typically achieved by having the money market fund invest in short-term high-quality commercial paper or government-backed securities. Money market funds had a rough go of it during the seize-up of the Treasury repo market during the global financial crisis, a story underappreciated but told in many other places, but be that as it may: fix the image of a money market fund in your mind.
Got it? OK. Now change the money market fund’s fast database to a slow database, make individual units of it movable without cashing out, and set the management fee to equal 100% of interest income. The tickets are now diamonds. The money market fund is now a stablecoin.
USDC, issued by Circle, is the largest money market fund style stablecoin (and 2nd largest stablecoin overall). Their backing is held (currently) in cash and low-duration U.S. government issued securities.
The money market model is, relative to other ways to construct stablecoins, boring. Boring is a feature! Boring means that the stablecoin operator can’t get seigniorage income through digital alchemy. Boring also means that the stablecoin is unlikely to see its value vaporized under conditions of market stress.
Stablecoins are “pegged” to something, typically the USD. A peg is a story about why two things which are not the same are, in fact, similar enough to be treated interchangeably.
The story for money market fund style stablecoins is that, while in normal circumstances you would just hold the stablecoin or move it around, you could at any time return it to the operator and receive actual money at par. Like money market funds, these stablecoin operators have high confidence that their net asset value (NAV) is always almost exactly $1 per unit outstanding. Even under conditions of substantial market stress, one does not expect e.g. short-term Treasury bills or high-quality commercial paper to become illiquid or trade at a discount, even at large sizes.
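The par story reduces to simple arithmetic: NAV per unit is backing assets divided by units outstanding, and redemption at par only works while that ratio stays at or above $1. A toy illustration with made-up numbers:

```python
def nav_per_unit(assets_usd, units_outstanding):
    """Net asset value per stablecoin unit: total backing assets
    divided by the number of units in circulation."""
    return assets_usd / units_outstanding

# Made-up example: $50.2B of T-bills and cash backing 50B units.
healthy = nav_per_unit(50_200_000_000, 50_000_000_000)  # 1.004, redeemable at par

# A 3% haircut on that collateral under market stress breaks the buck:
stressed = nav_per_unit(50_200_000_000 * 0.97, 50_000_000_000)  # below 1.0
```

The whole argument for boring collateral is that the second scenario is supposed to be nearly impossible for short-term Treasuries.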
That’s not an ironclad assumption! Again, it was false in 2008, when money market funds collateralized by Lehman Brothers commercial paper or Treasury repurchase agreements (repos) suddenly found their collateral impaired or illiquid. But it is what these boring stablecoins go with.
Another similar stablecoin is Paxos’ USDP, which is much smaller than USDC. Partisans between the two would probably say the main thing that differentiates them is the regulatory regime each operates under. As someone who is Switzerland here, I’d say “Meh, pretty much equivalent, and pretty low risk by the standards of cryptocurrency things.”
Part of that judgment for low risk is that these coins have enthusiastically courted engagement with U.S. regulators. Unwillingness or inability to bow to the demands of regulators, some of which make the coins worse qua products, is one reason why other stablecoin entrepreneurs went with the models discussed below.
One demand, for example, is that the stablecoin sponsors comply with Know Your Customer (KYC) and anti-money laundering (AML) regulations similar to the way other money services businesses have to. Enthusiasm for KYC and AML regulations is, to put it mildly, not universal in the cryptocurrency community. It will inevitably result in having to tell the user that the user can’t do the thing they want their money to do, at least some of the time, but current practice suggests that even enthusiastic compliance with these regulations results in an equilibrium far less frictionful than prevails for e.g. wire transfers.
Another demand was for more conservative collateralization than USDC previously used. That was unfortunate for USDC, since it costs them interest income, but regulators remember 2008.
Equity-backed stablecoins are sometimes called “algorithmic” stablecoins, to suggest they have the predictability of a well-operating computer program (on the way up) and blame the loss of billions of dollars on software rather than identifiable people (on the way down). I don’t think this is a particularly helpful frame. The interesting engineering is financial, not software.
Say you have a business. That business has equity value. If that business also plugs into many counterparties, keeps a ledger of money it owes to counterparties, and allows those debts to be transferred via any mechanism, that business could theoretically function as a payments rail. The business could choose to do this via letters carried by courier, via a slow database, or via many other methods.
How does the business maintain confidence among its counterparties that its debts are always worth face value, even if it doesn’t actually pay those debts back in any given period? By reference to its equity value.
If you continue to believe that e.g. Netflix has a large equity value, and equity takes impairments before debt does, then a Netflix bond (or Netflix Dollar) should maintain its value, all else being equal. Matt Levine has a great explanation of this.
Netflix is in the business of overpaying for mediocre content and distributing it really well, not in the business of facilitating payments, and so there is no convenient way to swap Netflix Dollars. People holding dollar-denominated liabilities from Netflix mostly just ask Netflix for their money, and charge Netflix an interest rate when Netflix desires not to repay those liabilities for a while.
But in principle, if Netflix invested engineering and partnership effort in making Netflix Dollars transferable on demand, Netflix could easily issue a stablecoin valuable only by remote reference to Netflix equity. Sophisticated marketplace participants would continue to believe that Netflix was good for its debts because we live in a society and because other sophisticated marketplace participants seem willing to believe that Netflix will eventually produce positive future cash flows meriting a generous current equity valuation. And, this is really really important, the size of the equity totally dwarfs the anticipated number of Netflix Dollars in existence.
What if you wanted to make Netflix Dollars but didn’t want to spend decades on a DVD business, streaming infrastructure, and content deals? Or if you wanted to issue a lot of Netflix Dollars, like billions of them, so many that short-term swings in the value of Netflix’s totally-real-all-must-acknowledge-some-people-do-pay-money-for-their-thing business might imperil the statement “There’s a whole lot more Netflix than there is Netflix Dollars”?
Well, you can conceivably make a Netflix Dollar out of any business. Even a fake or fraudulent one.
Let’s talk Terra USD, which was vaporized earlier this month.
Terra USD, which I’ll call Terra for convenience, was an equity-backed stablecoin. The equity was in the form of a sister token called Luna. (Cryptocurrency enthusiasts sometimes like to feign ignorance about tokens being equity claims, principally as a form of regulatory arbitrage. Luna is worse-is-better equity; worse in that it has far fewer protections than equity, better in that it could be sold to retail without getting one put in jail (yet).)
Why is Luna equity valuable? The argument was that Luna was an equity interest (“Utility token!” Quiet, you.) in the business that was the operation of the Terra slow database. Terra Labs and other parties would make the slow database available to software developers in return for ongoing fees, which would make Luna valuable for the same reason “sell an underwhelming database subscription with a complicated pricing model over time to developers in exchange for money” makes Oracle equity valuable.
Terra Labs added a feature to the slow database which would allow one to use the slow database to exchange Luna (equity) for Terra (the stablecoin) at par. If Terra traded below the $1 peg, arbitrageurs could buy it, redeem for Luna, sell the Luna (somewhere), and profit from the difference. If it traded above the peg, you could run that trade in reverse: buy cheap Luna, redeem for dear Terra, sell the Terra, now you’ve got more money than you started with.
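The arbitrage loop above is mechanical enough to sketch in a few lines. This is my paraphrase of the mechanism, not Terra’s actual code; the threshold and prices are invented:

```python
# Two-sided peg arbitrage as described above (hypothetical sketch).
# Real execution has fees, slippage, and the risk that the equity
# leg (Luna) collapses faster than you can sell it.

PEG = 1.00  # target dollars per Terra

def arbitrage_action(terra_price: float, threshold: float = 0.01) -> str:
    """Return the trade the mint/redeem mechanism invites at a market price."""
    if terra_price < PEG - threshold:
        # Terra is cheap: buy it, redeem for $1 worth of Luna, sell the Luna.
        return "buy Terra, redeem for Luna, sell Luna"
    if terra_price > PEG + threshold:
        # Terra is dear: buy Luna, mint Terra at par, sell the Terra.
        return "buy Luna, mint Terra, sell Terra"
    return "no trade"

print(arbitrage_action(0.97))  # → buy Terra, redeem for Luna, sell Luna
print(arbitrage_action(1.03))  # → buy Luna, mint Terra, sell Terra
```

Both legs assume someone is willing to buy the Luna you are dumping, which is precisely the assumption that failed.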
All pegs are stories. This story didn’t sound like a very good one, unless the Luna equity was valuable. Software company equity is more valuable when lots of people are doing interesting valuable things with the software.
So Terra Labs concocted the existence of an interesting valuable thing. They wrote a program using their slow database called Anchor. Anchor was an automated program to allow moneylending. There are many of these in DeFi land.
Many of them use an aggressive growth hacking strategy, which is saying “If you use my program, I will give you equity in my company.” This strategy is commonly called “yield farming.” Anchor was very aggressive about this, promising 19.5% APY yields on their stablecoin. To use their program to get 19.5% yields you needed to buy into their database software, which made their database software seem to be more valuable (“Look at all the users!”), which caused the value of their equity to go up, which allowed them to redeem their own equity for more money to throw at user acquisition, which…
… created a Ponzi scheme. With extra steps.
Was this a complicated bit of financial engineering? Was it conducted in secret by an elite priesthood who needed years of education to even understand the acronyms at play? Nope. It wore its I’m Going To Blow Up And Take You Down With Me heart on its sleeve. I looked at it for about two minutes and then confidently tweeted about the mechanism and its inevitable fate.
The cryptocurrency community did not cover itself in glory regarding Luna, because Luna made the right people an awful lot of money in 2021. As cryptocurrency venture capitalist Nic Carter has explained at length, it was apparently “the trade” of 2021. (His On The Brink is the best podcast in crypto, by miles.)
In early May, Terra Labs announced that it was going to stop subsidizing users of its computer program. Users stopped using it, never having had any reason to use it other than collecting Terra Labs’ subsidy. This caused the perceived value of Luna equity to decline, which put pressure on the peg, which caused people to exit the pegged stablecoin, which decreased the use of the slow database and further hurt the implicit equity value of owning the slow database, which…
The technical term for this is a “death spiral.”
On May 8th, Terra was the 3rd largest stablecoin in the world, with $18 billion in assets. Luna had recently been worth more than $30 billion.
It is, as of this writing, less than two weeks later. The shenanigans aren’t over, because Terra Labs thinks that it can trick people again, but many tens of billions of dollars were lost and will not be recovered.
All pegs are a story. This story will probably fill a book. The part of the book before the depegging is fiction; the rest is history.
I consider this effectively inevitable for the seigniorage model, because no business, not the most valuable business in the world, can continue having a high equity value relative to “all the money anywhere”, and to the extent a business incubates a non-negligible seigniorage-backed stablecoin in it, increased adoption of that product will forever represent a (growing!) threat to the business. The better that product gets and the faster it grows, the worse the threat.
Choosing to subsidize the use of the product was an accelerant but probably wasn’t even necessary to cause the collapse, and indeed we have seen several similar schemes (Iron Finance, Basis, etc). This is a product unsafe at any size. (I really enjoyed Preston Byrne’s writeup of Basis Coin back in 2017 and, if you read it, Terra was visible a mile away.)
Don’t want to comply with pesky government regulations? Do want to award yourself a license to print money? Then pretend to be a money market style stablecoin, but just lie about it. Cover redemptions with other people’s money that you misappropriate. Spread the wealth around to co-conspirators both directly and by driving up the price of assets you mutually depend on.
You are ride-or-die on the lie.
As long as everyone is making money, no one will look at you too closely. Your detractors will sound like crazy people; your co-conspirators will hopefully be extremely skilled at regulatory capture. Buy a president or prime minister. Buy several. Buy an entire sovereign nation and use it like Saul Goodman would use a nail salon.
You can afford it.
Any reason for optimism?
Successful proofs-of-concept are one way that the world gets better. It isn’t a law of nature that the right amount of ceremony and cost for international money transfers is the amount it would take to do things between banks. Wise (née TransferWise) experimentally disproved that, and brought an excellent product to market.
Money market style stablecoins don’t look to me like the obvious future of money movement. But they’re an argument with executable computer code attached. There is at least some possibility that that argument produces a user experience compelling enough, at risks society can stomach, that something which looks somewhat like them might end up a large, enduring part of the landscape.
I don’t buy it, but some smart people do.
I look at arguments like Wise or Cash App and think “Hmm, there is an excellent argument that these should interoperate and slow databases don’t necessarily need to be part of that interoperation. Plausibly it might even be so obviously true to society that they get mandated to do it.”
Professional niches have – since time immemorial – built their own special dialects and languages, sometimes using them as a sign that identifies members of the group as opposed to outsiders. This is most obviously identified with the concept of jargon, often with a tinge of negative connotation, but there’s more to it than that. Some of it is composed of metaphors, becoming closer to a form of poetry than technical speech. It’s memes built on shared professional background, which can convey a significant amount of information, as long as they are used intentionally, and to an informed audience.
I regularly find myself reaching for these figures of speech, sometimes making them up myself. But I fear that these are often opaque to newcomers — I do not want that to be the case, as I think we already do enough of a disservice to our future colleagues, and so I decided to try and define more of this “language”, to make it approachable by those who don’t yet have the abovementioned shared professional background.
To make the points stick, and also inject some smiles into what may otherwise be dry technical content, and on the suggestion of my friend Alex, I commissioned some art to go with the various concepts I’m about to dive into. The vignettes are the awesome work of Furryviza, whom I will totally vouch for when it comes to illustrating complex concepts based on a rough description!
Yak Shaving — I mean, someone will have to do it, no?
This term is particularly well known within bubbles, to the point that Google Dublin has enshrined it in their faux-pub microkitchen The Shaven Yak, but it’s a general industry term; even American Express uses it, sourcing it back to MIT in the ’90s. The semantics of the term tend to drift, particularly between groups and teams, but I personally use it to refer to tasks that accumulate a lot of dependencies around them, most if not all of which are not in scope of the original task but are tightly related to it.
To give an example, say you’re working on a new command line tool to automate a business process, and you end up using a common argument parsing library. The library may already have the ability to parse dates, but as you use it, you realize that it fails to accept ISO8601-style day references, so you may want to go ahead and fix the library to accept them, and write some tests for it. You may also find that the interface you’re meant to call into has some rough edges that mean you either need to apply pre-validation of the input on your tool, or you may want to extend the validation to prevent inputs that would lead to failure cases. And when you’re about to deploy this to the users’ workstations, you figure out that the deployment tool has been misconfigured and is attempting to deliver full debug information on laptops with limited disk space, so you spend half a day trying to fix this up.
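To make the first yak concrete: in Python, say, the hypothetical date-parsing fix plus its tests might look something like this (the parse_day helper and its pre-existing formats are invented for the example):

```python
# Hypothetical library fix from the example above: teaching a day-reference
# parser to also accept ISO 8601 (YYYY-MM-DD) input.
from datetime import date, datetime

def parse_day(text: str) -> date:
    """Parse a day reference; "%Y-%m-%d" is the newly added ISO 8601 format."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d %b %Y"):
        try:
            return datetime.strptime(text, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unrecognized day reference: {text!r}")

# The tests you would write while shaving this particular yak:
assert parse_day("2022-05-20") == date(2022, 5, 20)
assert parse_day("20/05/2022") == date(2022, 5, 20)
```

A five-minute change, in other words, yet entirely out of scope of the command line tool you set out to build.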
This is just a made-up example, mixing a number of different situations I did find myself in, with some of the actual obstacles replaced. It’s not dissimilar from the problems I dealt with while fixing packages in Gentoo, but it definitely goes well beyond those situations. If you want a more blow-by-blow telling of how yak shaving becomes part of one’s professional life, you may want to check Danila Kutenin’s post on std::sort — it starts with the premise of the task at hand (improving std::sort performance) and goes into the details of a number of problems identified along the way that needed to be addressed before the task could be completed, despite, by themselves, not affecting the performance of std::sort at all!
A pushback I have heard before about categorizing such work as “yak shaving” is that it is solving real issues. That usually comes from people who have been told that yak shaving is a pointless exercise, done only to keep oneself entertained or look busy. I disagree with this take, as I refer to yak shaving when addressing actual issues, just not issues that got in the way of someone else doing their job before. They are often to be found in someone else’s service, project, or backlog — just never important enough for them to spend their time on.
Yak Styling — Because Sometimes You Just Don’t Have The Time!
Yak shaving is often a controversial activity. On one side, “good engineering practice” (as a number of current and former colleagues would call it) demands you solve issues at the source and for good; on the other side, particularly in a corporate environment, it’s very difficult to justify an “infinite” amount of side-tasks, especially on others’ turf. Different managers have different thresholds for how much time can be spent on yak shaving, but I have never found a manager who’s explicitly happy to let an engineer shave away to their heart’s content (or should I say shear content? Ba-da-bum-tssh!)
And that’s the reason why, back when I was at Google, I changed my internal job title to Yak Stylist (“of the Typing Pool” — but that makes sense basically only for the Dublin SREs of the time.) The idea is that sometimes you can’t actually shave the yak you have in front of you, but you can at least make it look presentable, give it a few plaits, so that it’s just not that unruly.
In the contrived example above, I already sneaked in a possible “styling”: if you’re building a client for a service, and the latter does not provide enough input validation, it might be faster to apply the validation locally, rather than going out of your way to fix the service, which might be maintained by a different team. A typical approach of styling over shaving is to make sure that any identified problem is not simply worked around, but also documented — that’s the difference between ignoring the issues you encounter and doing something about them.
Sometimes you can’t just style away problems though, and shaving the yak will make your work easier, cleaner (or at least less messy) — or it might just be a matter of making yourself happier. I personally find myself shaving more than styling if I’m still pondering a good solution to my problem, or if I’m spending most of my time in meetings organizing work for others, and I want to feel more satisfied with myself and my work.
Striking a balance between which work to “plait” (file an issue with a project, document a known pitfall, …) and which work to “shave” is something I still struggle with at times, and I can only recommend people work it out with their managers (because, at the end of the day, they are the ones writing the performance reports!)
A Rabbit Hole With A Yak At The End — Yes, It’s A Trap
(Or, if your workplace is less strict about easily misunderstood terminology, you can refer to it as a “yak hole”.)
There’s a relatively famous clip out there from the TV show Malcolm In The Middle, where the father of the protagonist comes home to find a blown light bulb, going to pick up a spare only to find the shelf on which the box sits broken, then the drawer with the tools needing to be oiled, the can of WD-40 empty, and finally the car not starting.
Some people think that’s yak shaving, but it is rather rabbit holing (or ratholing, depending on how much of a negative connotation you want to apply to it.) The problem there is that one thing leads to another, and you end up down the rabbit hole, in a very Carrollian way, fixing different problems that have little to do with your original task. The difference is fundamental in my opinion: yak shaving is about many tasks, individually unrelated, all connected to a core task, while rabbit holing is about many tasks, connected to one another, starting from a core task.
The reason why we tend to use two different terms for what is effectively the same activity is the spin you put on your work. In my experience, it’s easier to spin it as a rabbit hole when the original task really cannot be completed without going down it, while the rat hole is the connotation you (or your manager) may use for a long series of tasks that may need doing, but did not block you from completing your core task.
If you take the clip above again, we never get to see whether there is a working lightbulb in the box of spares — and that would be the distinguishing detail between the two: if there is, the task (changing the lightbulb) could have been completed before setting off on a new task (fixing the shelf) — if there isn’t, you could justify attempting to fix the shelf or oil the drawer, since the car would still be needed to buy a new box of bulbs.
There is a third type of “hole”, which I usually refer to as “a rabbit hole with a yak at the end”. These start as a reasonable chain of dependencies between tasks, often within one’s team or organization scope, only for you to eventually end up with a much bigger task than you started with, one that involves multiple (or external) projects or teams.
To give a concrete example of this from some time ago, I was working on a team that maintained an automation framework, which a number of separate teams depended on. I was making some changes to an API that was an easy way to cause yourself pain, and after landing my change, I got a report that I had broken automations for one of the critical teams we supported. I fixed the issue, but then wondered how much test coverage the implementers of our framework had — and to know that, I had to enable coverage reporting. Once I enabled it, a few tests, including some for another critical team, started failing, leading me to discover that we had been assuming certain code was not just tested, but in use, while it had never worked to begin with: the tests only passed because a Python test missing an if __name__ == "__main__" block in Bazel always passed.
I could follow most of the rabbit hole to that point: adding coverage metrics for the users of our framework was clearly in the remit of my team. Removing the never-actually-working code was a worthy cleanup to undertake. But fixing Bazel to be more sensible in how it handled Python unit tests? Heh, yeah, that was definitely not a yak for me to shave — but I could give it a couple of plaits to make it easier for someone with more interest: I filed an internal report about the problem, and provided a workaround (enable coverage reporting, since that execution mode did fail the incorrectly-written tests), as well as made sure that all of my team’s supported tests were written correctly (spoilers: they weren’t — but that’s a different rabbit hole, now!)
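For the curious, the pitfall is easy to demonstrate without Bazel at all: a py_test runs the test file as a plain script, and a file without the __main__ block never invokes the test runner, exits 0, and is therefore reported as passing. A standalone sketch of that behaviour:

```python
# Demonstration of the "missing __main__ block" pitfall: running a
# unittest file as a plain script, the way a Bazel py_test does.
import subprocess
import sys
import textwrap

broken_test = textwrap.dedent("""
    import unittest

    class AlwaysFails(unittest.TestCase):
        def test_broken(self):
            self.fail("this never even runs")

    # No `if __name__ == "__main__": unittest.main()` here!
""")

result = subprocess.run([sys.executable, "-c", broken_test])
# Exit code 0: the class is defined, nothing executes it, and the
# failing test silently "passes".
print("exit code:", result.returncode)  # → exit code: 0
```

Coverage instrumentation changed the execution mode enough to surface the failure, which is how the never-working code was finally caught.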
Rake Collection — Only You Can Prevent The Next Face Smash!
I have written about rake collection before, which means I won’t be going into intense detail over it here, but I think it’s an important topic to spend a few more words on.
This metaphor is (mostly) my own, although inspired by a tweet by Matthew Garrett. If you consider the obstacles to getting your job done as rakes (which, as Sideshow Bob knows, are extremely painful to step on!), a good engineer should be collecting the rakes they find around, rather than throwing more of them around.
The context was in particular related to the concept of seniority – experience bringing the ability to notice rakes without necessarily stepping on them – but it doesn’t have to be limited to that. Any contributor who just stepped on a rake and smashed their face should consider taking that rake off the floor — and should be rewarded for doing so.
The “rakes” can take many forms. They may be Tanya’s traps, which are more often found in tools and interfaces that can be confusing or difficult to use, or they may be related to processes, or organizational patterns. It might be a confusing log message that points you in the wrong direction, an alert that misdirects you, or the need to go and request a director sign-off on a resource request to complete a routine operation.
During my career I collected dozens, if not hundreds, of rakes. They ranged from fixing hyperlinks so that you could reach the right service in one click (rather than having to backtrack to find the right name to use), to taking ownership of a core library to make sure that the thousands of callers didn’t have to manually check whether starting up their RPC server succeeded.
As I already noted, seniority affects your ability to notice these rakes — to extend the metaphor, through experience you also gain the ability to grab the rake before it hits you in the face and breaks your glasses. By this I mean that the more you work within a certain team, organization, company, or industry, the easier it becomes to get rid of the problems that less experienced contributors would be facing.
Let me talk about this in the context of more operational teams. In my experience, most teams will end up with a number of “legacy alerts” that members of the rotation who received it before know how to interpret just by its title. The title itself may not be an accurate description of the problem, but by experience you end up remembering that when that particular alert fires, it’s likely that a completely different system got stuck. If the alert comes with any documentation, it’s very unlikely that the documentation even points at the right place, and if there are links to dashboards, it’s probable that they don’t even exist anymore.
When a new contributor joins such a rotation, it’s very likely that they will take the alert at face value, and spend time trying to figure out what it means, and why it’s firing, and where the dashboard moved to, and so on. Experience may make the difference between spending hours debugging a problem in the wrong system, which then escalates into a full-blown incident, and realizing that the alert is misleading. At that point, fixing the alert title to be more meaningful, updating the documentation to point at the right system, and possibly removing the broken dashboard link (or replacing it with a more relevant one) would be collecting the rake.
Diagonal Contributions — I Help You All, But You All Help Me!
This is another topic I wrote about in the past. And while the rest of the idioms apply to the industry in general, no matter the size of your workplace, this is one of those that only really makes sense at bigger companies, since in a small five-person startup, every contribution is effectively a diagonal contribution.
In my experience at least, most big companies end up with teams that can be broadly categorized as “product” or “infrastructure”. This is not a perfect categorization, obviously, because you could make a product out of your infrastructure, or you could build infrastructure to support multiple services in the same product. Indeed, this type of tension is often (again, in my experience) the source of many re-orgs.
When working on a product (or product infrastructure) team, most of your work is oriented at improving the product (“vertical”), while on an infrastructure team, your work is likely to ensure that the internal users are satisfied (“horizontal”). This should make it a bit more obvious what I mean by “diagonal”, then: from the point of view of a vertically-focused team, it’s about making sure that the work applies across a range of (internal, usually) users, while still advancing the needs of the product itself.
I needed this metaphor to describe some of the work I accomplished a few years back, and to explain to those evaluating it why I didn’t just solve a problem in our tooling, but went out of my way to fix the underlying framework to do the right thing for everyone in the company. From an engineer’s point of view, this may sound trivial, but the truth is that your career depends heavily on how your manager, and their peers, perceive your impact and your work.
Indeed, unlike the other idioms I’m talking about here, this is almost exclusively spin, and can be applied to any of the others. You can “collect a diagonal rake”, like I did when I decided to fix all of the RPC servers not to silently ignore a failure to open their listening ports, rather than just address it in my own team’s code. You can “shave a diagonal yak” when you spend spare time addressing a number of blockers to improving a core library that your team uses but doesn’t maintain.
I have to admit my track record of convincing management about the importance of my work based on this particular idiom has been… spotty at best. I still believe in the concept, though. In a healthy organization, I see that the contributions outside of one’s own specific team are generally celebrated and rewarded — particularly if discussed and scheduled with the stakeholders at the beginning of the work.
So, What? Are The Yaks Taking Over?
Groups, organizations, and industries will always come up with new idioms and new jargon. It’s shorthand to point people in the general direction of something they have already seen and dealt with in the past. Codifying more of these terms, describing them, and opening their meaning to non-members of the group is, in my opinion, a necessary step to allow more access to an industry that has, for good or bad, taken over the world.
The terms I’m using here are not universal — particularly not for those that I ended up coining myself (such as yak styling and rake collection), but I have had good luck with having them adopted by my teams. They don’t replace “corporate speak”, particularly as used by upper management, but I find them an important stepping stone to build the shared context that makes work easier.
And if you’re wondering why I am particularly fixated on yaks to explain concepts related to my work, the answer is that I don’t think professionalism needs to be dry and serious all the time. Injecting a level of humour into what is otherwise a fairly boring description of design, discussion, coding, and evaluation work is part of what makes me happy to keep doing my job.
Try it, and let me know if it made the conversations with your peers less awkward and more fun!
I’m a bit of a geek. OK, more than a bit of a geek. I double majored in Japanese and Computer Science, used to run a WoW guild, etc etc, I could be pictured next to “geek” in the dictionary. I sometimes worry about this coming across too strongly in professional spaces and sometimes want to just geek out. So it gives me enormous satisfaction when one of my geeky hobbies intersects my professional interests.
Recently, I got back into fantasy miniature painting, after a 17 year hiatus. (Quick plug: YouTube is the best thing ever for people getting into a hobby or profession which involves tacit knowledge. Lyla Mev did more in 7 minutes for my painting skill than all of my previous practice. You can lose many, many hours watching extremely talented people break down exactly what they're doing, and watch high-def video translate sorcery into replayable, replicable hand motions.)
What was once a traditional IP-rich industry (a great writeup by Byrne Hobart on Games Workshop can help non-geek readers get up to speed on it) has had disruptive innovation happen through a coalition of Shenzhen manufacturers, bicoastal U.S. tech companies, mom-and-pop factories throughout the Western world, and a global network of small firms producing mass-customizable sculptures.
Let’s start with what the end user sees and then trace the supply chain backwards.
(Disclaimers as always: while I’m not directly conflicted with any company named below, some of them are likely clients of my employer, Stripe, and I may have purchased their wares recently at standard retail prices. As always, all views here are my own, and nothing is non-public knowledge.)
“Your high elf wizard’s cheekbones are not quite what I imagined for mine”
Some people paint fantastic miniatures purely for the aesthetic value, but the dominant use of them is to play games. Depending on the game, there may be an in-game benefit due to painting, but I’ll spare you that rabbit hole; suffice it to say that some hobbyists who spend a lot of time in a fantasy world get attached to their avatar(s) and want them to look both good and the way they think they should look.
This provides a challenge for traditional manufacturing industries, because the number of SKUs a game retailer can stock is finite and the number of D&D characters is not. Even for just minimal correctness on “vaguely appropriate for a gender/race/character class” there are over a thousand combinations possible, which already is more SKUs than a hobby store can afford to stock for this offering, and players sensibly demand that their character not look generically like every other high elf wizard in existence. Some want the character a bit plumper, some want them in a wheelchair, some want them holding the magic item they strived for in the campaign, etc.
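The “over a thousand combinations” claim is just multiplication. With illustrative (not canonical) category counts:

```python
# Back-of-the-envelope SKU arithmetic; the counts are illustrative.
genders = 3    # a deliberately minimal count
races = 30     # core races plus common subraces
classes = 13   # wizard, fighter, rogue, cleric, ...

combinations = genders * races * classes
print(combinations)  # → 1170, before poses, gear, or body types
```

Every additional axis (pose, weapon, build) multiplies that figure again, which is why finite shelf space loses.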
Historically, the license to manufacture D&D miniatures was held by Wizkids, which produces a few hundred different SKUs at any time. Wizkids has a model not too dissimilar to Games Workshop and has a similar production function. They hire artists directly or on a contract basis to sculpt elven wizards. They arrange for a factory in, without loss of generality, China to produce a large number of copies of that single wizard. They send those copies to distributors. Retail stores order from distributors, mark up their wholesale price of the wizard by about 100%, and sell to the public.
Then the Internet happened, and then some improvements in plastic manufacturing happened, and then the Internet happened again.
Today, if you go to e.g. Etsy, you will find more plausible high elf miniatures available, for about $5-10 each plus shipping (about the retail price point for single miniatures), than Wizkids could create in a corporate lifetime, with hundreds more being released every month.
Consider, for example, ScatterMaster’s rendering of Amlund Mageon, a wizard who bears a striking resemblance to many depictions of Gandalf, in a bit of remixing well-known in the hobby and practiced by Tolkien as well. A hobbyist unfamiliar with this might think that ScatterMaster is a truly talented sculptor or has hired one.
ScatterMaster may be an artist in their own right, but relevantly to a transaction on Etsy, they are a small U.S.-based manufacturing firm which has licensed the rights to print Amlund Mageon for profit from a French sculptor who does business as Galaad Miniatures. The terms of the license are simple: ScatterMaster pays Galaad ~$30 a month for a license to reproduce any of their sculptures in any quantity, as long as it is strictly 3D printed only.
It is a non-exclusive license (and that is why the artist enjoins the manufacturer from using any technique amenable to industrial scale: it maintains the value for many licensees rather than reducing Mageon to his weight in plastic). Dozens of firms are paying Galaad the same $30 a month, along with a hundred-odd individual hobbyists.
Contrary to the starving-artist cliché, Galaad receives approximately the median French salary from Patreon, (virtually) guaranteed at the start of every month, and to earn it the next month they just have to crank out another 8 sculptures to add to their library.
They can (and factually do) earn more by taking commissions, running Kickstarters or similar where they provide (or arrange for) their own fulfillment, or licensing STL files a la carte. If you really like Amlund Mageon, own a 3D printer, and wanted to print him yourself, the 3D “source code” is available from Galaad on MyMiniFactory for $5, of which Galaad gets to keep $4.50.
The average hobbyist does not own a 3D printer, and specialization in the hobby (and hobby-adjacent artisanship) is what enables ScatterMaster and Galaad to both have businesses. The small-scale manufacturer fronts the capital for maintaining (probably) a printing farm producing miniatures as fast as orders can come in, and does both the labor required to turn semi-toxic resin into pretty figures and also the entrepreneurial application to (basically) make sure they show up on top of Etsy and Google for searches for “wizard miniature for D&D game maybe a bit old and wizened?” The artist gets to largely outsource per-order customer service questions, fulfillment, owning and operating a printing farm, the complexities of international trade, etc. They can focus on marketing to end-users (and perhaps more importantly to potential manufacturers) and sculpting miniatures which fill the holes in their users' product lineups or tabletops.
There are many other firms one could name here, on both sides of this market. Some of them are strictly mom-and-pop manufacturers or individual designers. Some are small but thoroughly professionalized firms, like Loot Studios in Spain, which has an in-house painter and video crew to produce marketing collateral showcasing the work of their team of sculptors to potential licensees.
And interestingly, in sharp contrast to most global plastics manufacturing, it is extremely useful to this supply chain that both the final producer and final consumer are in the United States, because it enables low-latency, low-cost shipping. Most plastic you consume originated in China and got to you via, at one point, a rather slow ship, and the latency between the factory and your door was on the order of several months, not a few days. Most plastic doesn't particularly care about the delta of a few months or staying in inventory, but a malevolent lich has a tight schedule to keep or he might be dead (permanently, this time) before his physical instantiation arrives on the tabletop.
But there is still a supply chain implicated here in China.
Shenzhen is eating the world
There are broadly two types of 3D printers in common use. One uses thermoplastic filament sourced from a spool and extruded through a heated nozzle attached to a gantry with three axes of motion to build a printed object from the build plate on upwards. This was the first widely commercially available 3D printing technology for home or small business use, and while it has a lot to recommend it for many applications, it did not take off for the miniature use case.
The other type of 3D printer is a resin printer, which is a technological marvel of chemistry and hardware design. If you want a full technical explanation I recommend the YouTube-delivered seminar Ph.D Chemist Explains 3D Printer Resin by a fellow painter who also found his professional life colliding with his hobby unexpectedly.
3D printer resins are liquid photoplastics; they cure (harden) in the presence of UV light. An LCD screen beneath the transparent bottom of a vat of ooze exposes a layer 30-50 microns thick to harden it; a screw-driven single axis then pulls the build plate upwards (to peel the new layer off the film) then lowers it back down (so fresh liquid resin flows beneath the newly solid layer before it re-adheres to the film). The process then repeats until the print is done, in something a calculus teacher might describe as integration by slices and a timelapse videographer might describe as absolutely mesmerizing.
And here physics and calculus made an unplanned splash into the economics of printing 3D fantasy miniatures. With the thermoplastic option, build time increases linearly with the weight/volume of models printed. The photoplastic option does not: build time increases linearly only with their height, and the other two dimensions are free up to the limit of your printer’s build plate.
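To make that scaling concrete, here is a toy model of a resin printer's layer cycle; the layer height matches the 30-50 micron figure above, but the exposure and lift times are illustrative assumptions, not any particular machine's specs:

```python
# Toy sketch of a resin (MSLA) printer's layer cycle, with assumed timings.
# Real firmware handles exposure calibration, lift speeds, retries, etc.

LAYER_HEIGHT_MM = 0.05   # 50 microns per layer (upper end of the stated range)
EXPOSURE_S = 2.5         # assumed UV exposure per layer
LIFT_AND_RETURN_S = 6.0  # assumed time to peel off the film and re-coat

def print_time_seconds(model_height_mm: float) -> float:
    """Total time is driven by height alone: every layer costs the same
    whether it covers one miniature's cross-section or thirty."""
    layers = round(model_height_mm / LAYER_HEIGHT_MM)
    return layers * (EXPOSURE_S + LIFT_AND_RETURN_S)

# A 30 mm miniature takes the same time whether you print 1 or 30 of them,
# as long as they all fit on the plate.
print(print_time_seconds(30) / 3600)  # roughly 1.4 hours, plate-full or not
```

The key point the sketch encodes: the cost of a layer is paid once per layer, not once per model, so filling the build plate with miniatures is (nearly) free throughput.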
Recall that printed miniatures are numerous, extremely valuable relative to their weight, and… very, very short. So you could, with a prosumer-grade printer costing only a few hundred dollars, print miniatures with a retail price equal to half the cost of the printer in a single print run lasting approximately two hours and requiring on the order of $3 of expendables (resin, cleaning materials, paper towels, and gloves) and less than an hour of skilled labor.
Intel would kill for the ROIC here, is what I am saying.
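The back-of-envelope math behind that claim, with the printer price and labor rate as my own illustrative assumptions and the rest taken from the paragraph above:

```python
# Rough per-run economics of a prosumer resin printer.
# From the text: one ~2 hour run yields retail value equal to half the
# printer's cost, ~$3 of expendables, under an hour of skilled labor.
# The $400 printer price and $25/hour labor rate are assumptions.

printer_cost = 400.0                     # assumed "few hundred dollars"
retail_value_per_run = printer_cost / 2  # per the text
consumables_per_run = 3.0                # resin, cleaning materials, gloves
labor_cost_per_run = 25.0                # ~1 hour at an assumed $25/hour

gross_per_run = retail_value_per_run - consumables_per_run - labor_cost_per_run
runs_to_recoup = printer_cost / gross_per_run

print(f"gross per ~2 hour run: ${gross_per_run:.0f}")        # $172
print(f"runs to recoup the printer: {runs_to_recoup:.1f}")   # ~2.3
```

Under these assumptions the machine pays for itself in a weekend of printing, which is the intuition behind the ROIC quip.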
So where does Shenzhen come in?
The many faces of ChiTu
There is a thriving market in 3D printers both for industrial applications and for prosumers, with price points for the latter clustering in the $300 to $1,500 region. They are largely made by a small cluster of companies which, unlike most manufactured Chinese products by tonnage, are consciously branded, with global marketing campaigns and customer service. Famous names in this sector include Elegoo (based in Shenzhen, short for “electronic googol”, a reference to a bit of math geekery which has proven auspicious in the tech sector before), Phrozen (based out of Taiwan), Anycubic (back to Shenzhen), and similar.
Much like paper printers, it turns out there is a flowing river of money in the resin printing business in rough proportion to the economic utility of all printed things. Most of that money is literally liquid. The hardware business is an extremely rough one to be in; my quick estimate of the BOM (bill of materials) for a low-end prosumer printer is about half of the purchase price. But the printer literally exists to drink resin, and resin can be optimized and branded. Paper printers have fought an unending war against third-party ink and toner, which needs to solve a difficult but tractable problem of repeatedly transferring pigment to paper. Resin, on the other hand, exists in an eight-dimensional product space before you even start seriously thinking about it; if (for example) it doesn’t cure effectively at the wavelength of light your machine outputs then it is virtually useless to you.
This results in the printer manufacturers getting to price their resin at a premium and keep it, while maintaining a commanding share of kilograms consumed by their installed base, rather than being preyed on by third-party resin. Resin is sold in 500g to 1kg bottles at price points in the mid tens of dollars; it is produced in multi-tonne increments and likely at 98%+ margins. The chemical and economic properties of it make it basically ideal for putting in a shipping container, even when the things it will be used to make would find that container a terrible economic fit.
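Taking those figures at face value (a $30/kg retail price, my pick within the stated "mid tens of dollars" range, and the quoted 98% margin), the implied unit economics are:

```python
# Implied resin economics from the figures in the text; the $30/kg retail
# price is an assumption within the stated "mid tens of dollars" range,
# and the 98% margin is quoted as a likelihood, not an audited number.

retail_per_kg = 30.0
gross_margin = 0.98   # "likely at 98%+ margins"

production_cost_per_kg = retail_per_kg * (1 - gross_margin)
gross_profit_per_tonne = 1000 * (retail_per_kg - production_cost_per_kg)

print(f"implied production cost: ${production_cost_per_kg:.2f}/kg")        # $0.60/kg
print(f"gross profit per tonne at retail: ${gross_profit_per_tonne:,.0f}")  # $29,400
```

Which is why each multi-tonne batch dripped out one branded bottle at a time is such an attractive business.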
And so the printer manufacturers likely make most of their money from selling consumables, primarily resin but also replacement parts for the machines (the physics of the process invariably damage key parts over time, the LCD screen will eventually burn out, etc.).
The printer companies are extremely sophisticated at marketing, to a degree which would be notable in almost any industry. They all operate first-party online storefronts and e.g. Amazon stores in their key national markets, and they aggressively promote themselves via e.g. ensuring that widely-watched hobbyists like Uncle Jesse receive review units of their printers and all the resin they can use, suitably disclosed. (One could write another essay just about the information economy for printer/resin/etc buying decisions; it’s a lot of the affiliate links, starred reviews, influencer marketing, online advertising, Amazon gaming, and similar you would expect.)
And inside substantially all the resin printers… is one board to rule them all, brought to you by ChiTu. ChiTu has a mortal stranglehold on the software and hardware that turns exported 3D designs into a series of tightly coordinated instructions for the motor and LCD screen. The printer manufacturers can all source final assembly and commodity electronic and mechanical components, but (in this segment) are not able to build the board yet.
ChiTu has vertically integrated itself from the embedded software segment up the stack to the slicing software as well. Slicing software is a requirement for resin 3D printing; it helps the operator turn an artist’s sculpture into something that can be physically realized on the hardware they operate. The most basic function is chopping the sculpture up into a scripted series of 30-50 micron slices (hence the name). It is also used to e.g. add additional planned plastic to support the structural integrity of the print. A fairy needs to hang upside down from a build plate fighting real-world gravity while perhaps being supported by a gossamer dress and the dreams of an imaginative child. Tolkien didn’t have to worry about that; operators do, and so you both have to put on (in software) and take off (with your own carefully gloved hands) plastic supports.
This is another reason why the mom-and-pop factories pumping out prints have an economic reason to exist: there is substantial skill involved in correctly operating one’s slicing software, one’s printer, and one's post-production process to produce a model which is realizable in the physical world and survives its rigors in an aesthetically pleasing fashion.
(I have three scaly 30mm lizard rumps attesting to the fact that dragons might be vulnerable to arrows of dragonslaying but are absolutely destroyed by face-first contact into curing resin if their vicious claws can't stay attached to a milled aluminum build plate. Each of them cost me hours and a thorough cleaning of the printer to prevent the congealed puddle of dragon from penetrating the tank’s film and drenching my WFH desk in mildly toxic resin. Clearly the economically rational thing for me would be to procure my dragons from Etsy, but this is more fun than I’ve had in ages.)
Anyhow, ChiTu has started forcibly integrating ChituBox, their slicing software, with the leading printer manufacturers, and they have them over a barrel (of resin). The manufacturers hate going to their customers and saying “The hardware you buy from us now comes with a we-hope-this-won’t-be-mandatory $169 a year subscription”, particularly as they get to keep 0 cents of that subscription. (Competing slicing software exists, most interestingly the Belgian/French Lychee, but they don’t manufacture the baseboards.)
So if you are following the money:
An end-user buys something from a small manufacturer on Etsy or a similar marketplace.
That manufacturer made a capital investment and OpEx purchases from a printer manufacturer. They have ongoing non-exclusive IP licensing agreements with multiple artists, brokered by software/financial services like Patreon and MyMiniFactory.
The printer company buys their boards from ChiTu, which (separately) may charge the plastic manufacturer an annual software subscription.
This is an impressively multilayered, global supply chain. Prior to buying my own printer, a single miniature routinely involved a manufacturer in Utah, a printer (and resin) from Taiwan, a board from China, a sculpt from Spain, and logistical and financial knitting by Etsy (East Coast, USA) and (multiple uses of) software-heavy financial platforms (West Coast, USA).
It's useful to ponder why the money movement here is more complicated than simply "banking" or "payments." The models themselves look like a classic e-commerce transaction, but their supply chain implicates a high-volume aggregated-then-distributed subscription commerce service needing to make multinational pay-ins and pay-outs. That is something that you can't conveniently get from either a bank or from the credit card networks. And the capability to offer this allowed entrepreneurs to create a business model which looks very little like existing firms; it is neither vertically integrated nor does it recapitulate the supply chain for most physical items with embedded IP.
Kickstarter: Marketing and capital stack for B2B art producers
Another wrinkle here: we’ve discussed the operation in steady state of an artist, plastic manufacturer, and similar, but cold starting an artist on the subscription model is almost as difficult as cold starting a SaaS company. As you might expect, VCs are not exactly wowed by the pitch “I am going to sculpt a lot of ogres; the IP will eventually be worth tens of thousands a month” and many artists cannot afford to bootstrap themselves into having a library worth licensing.
Enter Kickstarter, which both has an impressive amount of reach in hobby-adjacent spaces (people who enjoy board games have written more about the mechanics of board game Kickstarters than you could read in a week) and critically can raise I-can’t-believe-it’s-not-capital in advance of the IP being fully printer-ready.
And this itself has created a thriving little ecosystem of software firms! And enables advancements into more capital-intensive manufacturing!
A worked example: Dragon’s Hoard (the editor in me might suggest "Horde" instead, since if a gamer wanted the pile of gold and gems a dragon slept on they would find it from a separate sculpting firm) is a set of miniatures which had a successful Kickstarter raising $250k. The campaign's proof-of-competence was enough beautifully sculpted goblins to demonstrate likelihood of success, but probably not enough to sustain a firm via the above-described ecosystem. This paid for both years of artist time and a substantial amount of fulfillment work, because they are selling miniatures and not computerized descriptions of miniatures.
The physical properties of the models they describe are not easily achievable by the production processes used by traditional miniature manufacturers or the resin 3D printing ecosystem. This suggests to me that they might be adopting SioCast, which has the wonderfully evocative (to industrial engineers) tagline “the link between 3D printing and injection molding” and which has to be seen to be believed. SioCast, if it is widely adopted, would be another revolution in miniatures manufacturing, because it combines the (very high and low-marginal-labor-cost) scalability of traditional injection molding with the (very low) setup costs and lead time of 3D printing.
And while that might not make all that much difference for the long tail millions-of-SKUs that the hobbyist market wants, it makes a huge difference for e.g. wargamers, who need hundreds of models and can tolerate many of them looking substantially identical. You could imagine the Games Workshop of the future shipping new boxes of models directly to subscribers (and perhaps or perhaps not to hobby stores) on a monthly cadence, not on an annual-or-longer refresh cycle, basically as soon as their artists could sculpt them.
Anyhow, that is my guess at why it costs about $5-10 for a single artisanally produced fantasy figure via 3D printing but Dragon’s Hoard is pre-committing to more than 100 for less than $100. They have successfully re-achieved economies of scale in plastic production, but at a scale more easily obtainable than in traditional plastic production, with a product which is substantially better for purpose than the one produced by traditional techniques.
Kickstarter apparently leans into their marketing and I-can’t-believe-it’s-not-capital-raising side but largely punts on fulfillment, which has caused a flourishing of boutique software-and-financial-services providers called “pledge managers” to step into the gap. A pledge manager is something akin to a CRM crossed with Shopify crossed with a physical fulfillment workflow. It lets backers pay for their (country-specific) shipping costs once the product is actually ready for delivery, lets the campaign not drown in Excel while managing thousands of orders, and sells “late pledges”, which are post-Kickstarter-campaign purchases of the product. You could see Dragon’s Hoard’s here on GameFound.
And, playing forward a little bit, these Kickstarter campaigns buy artists/designers/entrepreneurs enough runway to produce assets which have enduring value, both the IP created and the customer relationships and reputation within the community. Reputation both leads directly to more customers and indirectly to more demand for prints of things in your style/range on e.g. Etsy, which causes manufacturers to license your IP. This causes a virtuous cycle not entirely dissimilar to the economic engine behind Disney or Marvel, but heretofore unknown in artisanal scale fantasy sculpting.
HeroForge: AI enables amateur sculpting
If you’re paying attention to the intersection of artistry and AI you might have seen it roiled by Dall-E, an image generation model which spits out plausible-if-vaguely-hallucinatory visual artworks in a variety of styles. Many commentators have opined on whether or not this will endanger the jobs of visual artists.
I tend to think that artists, like writers and programmers, generally succeed via achieving symbiosis with emerging technologies rather than being supplanted by them. Every programmer you’ve ever met is utterly dependent on an AI to do their work; we just call it a compiler or interpreter. The principal reason that the programmer/interpreter symbiote is not classified as an AI is that a book describing their operation isn't filed under science fiction but rather history or current affairs.
So imagine you’re someone with an idea for a fantasy character but not the time or sculpting skills to chisel them out of the ether. You might use HeroForge, a truly remarkable technical achievement which is essentially an in-browser CAD software which specializes in fantastic characters. Their software is free to use. The business model is that you can export the character you design to either a 3D printing business located in China (for about $20-$45 plus shipping, depending on whether you want it colorized) or to an STL file for printing on your 3D printer (for $8 a la carte or ~$3 if you want to commit to a monthly subscription).
The results are… impressive. (See below.)
And despite the fact that they offer almost uncountably many models which would be at home in a D&D game, they do not in fact result in technological unemployment of fantasy sculptors, who are enjoying the best market for their skillset in history. A moment's thought will answer why: HeroForge is extremely good at producing a beautiful model which will look to all cognoscenti like it is a HeroForge model. Something of the job-to-be-done of a fantasy miniature is looking like you belong in the world with your friends but are nonetheless very different from them. In this, they are not dissimilar to fashion (or art generally).
Bringing it back to the real world
And there you have it: that’s the geekiest combination of chemistry, hardware/software, and international finance to have recently colonized my desk. I find these sorts of deep dives into tiny parts of the real world are good at providing context for the broader forces which are reshaping society, such as globalization and increasingly software-mediated economies. If you’d like to see more (or less) of them, let me know. I promise that the next one won’t be quite this geeky.
To echo a line from Stripe's annual letter, we're still in the earliest stages of the broadly participatory cultural (and economic) dynamism unleashed by the Internet.