
Nostalgia over blogging vs the current social media hellscape


A sentiment crossed the Fediverse recently, in the vein of "RSS was peak social media, change my mind". The original post was from https://hachyderm.io/@Daojoan@mastodon.social and is quoted below:

RSS never tracked you.
Email never throttled you.
Blogs never begged for dopamine.
The old web wasn’t perfect.
But it was yours.

https://mastodon.org/@Daojoan/114587431688413845

I was there for the rise and fall of blogging, so the rest of this post is me overthinking this particular post.

RSS never tracked you

RSS never tracked you in the same way that HTTP or HTML never tracked you. What tracks you with HTTP/HTML are the rendering engines for HTML and JavaScript. I can say with absolute certainty that tracking tech during blogging's height was used to measure audience, click-through rates, and other site-engagement metrics. RSS was the loss-leader for clicks on sites. This particular post uses one of those tricks: an opening paragraph promising more, which, if you're reading this through RSS, you will have to click through to read. Back in the day, tracking RSS feeds was done through tracking pixels, because you couldn't trust JavaScript to run in RSS readers but HTML rendering generally worked.
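To illustrate, here is a minimal sketch of how a per-subscriber tracking pixel could be embedded in a feed item. The domain, query parameters, and function are hypothetical, for illustration only, not any real platform's API:

```python
# Hypothetical sketch: embed a 1x1 tracking pixel in a feed item's HTML.
# The stats domain and parameter names are made up.
def feed_item_html(body_html: str, feed_id: str, subscriber_id: str) -> str:
    pixel = (
        '<img src="https://stats.example.com/open.gif'
        f'?feed={feed_id}&sub={subscriber_id}" width="1" height="1" alt="">'
    )
    # Any reader that renders HTML fetches the image, logging the open;
    # no JavaScript needed.
    return body_html + pixel

item = feed_item_html("<p>Opening paragraph, click through for more...</p>",
                      "blog42", "reader7")
```

The reader's HTTP request for the image is the tracking event; blocking remote images (as many modern mail and feed clients do) is what defeats it.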

Email never throttled you.

Email absolutely did throttle you. The problem with spinning up a self-hosted newsletter service in 2025 is getting your IP reputation good enough that the big mailbox vendors (Google, Microsoft, Yahoo/AOL) let you deliver. This issue was in its infancy in 2005, but it was an emerging problem as IP reputation became established as a low-cost first-pass anti-spam technique. Mailing list operators ran into this all the time back then as various subscribers moved behind security appliances doing IP reputation.

Blogs never begged for dopamine.

On a factual basis, this is false. Sole operators like myself lived for comments, that dopamine hit from people liking what I wrote. If I couldn't get that, I'd enjoy reshare statistics (see the first point about RSS tracking). Many commercial blogging platforms even created RSS feeds for comments on specific articles to make it easier to keep up on discussions. If that isn't dopamine, I don't know what is.

The old web wasn’t perfect.
But it was yours.

True, to a point. I tagged this article "blogger" because that's what I hosted this blog through for most of its first decade. You know, a centralized blogging platform akin to LiveJournal or Dreamwidth back then, or Medium and Substack today. I didn't own the platform. After I moved to my own domain and Movable Type, this became true. And yeah, it wasn't perfect, but it was most definitely mine.


There is another layer to this post beyond the simply factual, and that's a critique of platforms. Blogging in its heyday was perhaps the last major gasp of the Old Internet, where a bunch of hobbyists created something beautiful and widely adopted, which then got enclosed by commercial interests. Blogging was absolutely not centralized. That lack of centralization meant there was no unitary profit motive looking to drive engagement to increase sales; those incentives were limited to individual sites.

Blogging's decline is traceable to two big trends:

  • Google killed Google Reader, which was the dominant RSS reader by a long shot. This death forced a bunch of folks to look for alternate platforms. My stats show my readership plummeted by over 50% on death-day.
  • Twitter and other early social media provided a much shorter dopamine feedback loop than the blog-publish-comment loop.

The fact that Google Reader's death functionally killed RSS-based distribution is proof that blogging was already beginning to centralize. Google couldn't control production and distribution, only engagement; and the real money was in controlling all three. Google wanted what Twitter had: the entire production, distribution, and engagement framework on the same site, with the same owners and tracking infrastructure. Once they had that, they could start engineering dopamine feedback loops to improve stickiness and engagement; and so we have all the algorithmic and dark-pattern pathologies we know and loathe today.

Modern internet users have been trained for a decade and a half to expect social media to involve a single site, or a small number of sites, that do everything, with individual posts short enough to deal with while waiting for a bus or for the kids to walk from school to the car. The old blog+RSS model simply can't compete with this, and better fits as attention-competition for a given user's news media consumption. Medium and Substack both offering paid subscription options for individual blogs is proof that news media, not social media, is the competition to blogging.

The nostalgic exhortation to set up a blog and distribute through RSS is in the same vein as calls to return to IRC and dump Slack/Teams. Some people and groups can do that, but the platform features are different enough that the old tools don't feel feature-complete, and that lack nudges folks back into the commercial walled gardens.

Read the whole story
Flameeyes
2 days ago
reply
London, Europe
Share this story
Delete

Pressure makes diamonds


(Like this article? Read more Wednesday Wisdom! No time to read? No worries! This article is also available as a podcast).

People who are getting on a bit in years (like me) like to discuss how nothing is like it was anymore. In that line of conversation, there is a fine distinction between outright complaining and a potentially useful analysis of changes in society and how these changes impact the things that are happening. Last week, during a dinner with my former manager at a big tech company (who is also past the target age for an AARP membership), we started discussing how we got into the field, how that was different from the situation people find themselves in today, and what the consequences of that are.

When I started getting interested in computers, everything was an uphill battle in all directions: Computers were scarce, they were underpowered, and they were expensive. There was little information available and what was available was difficult to get and, again, expensive. I regularly bought computer magazines and tried to get my hands on any and all books that discussed computers and programming. It was also a very lonely endeavor, as I knew almost nobody who knew anything about computers.

Then, when I went to college, access to computers, books, and people improved significantly, but it was still no panacea. For instance, during my first two years at college, we had to reserve time slots to use the terminals of our school’s mini-computer. Bookings were for 30-minute time slots, and you could have at most three slots outstanding. Yes, you read that right: we could have a maximum of 1 1⁄2 hours of terminal time reserved up front! When that time was spent, you had to go find the reservation terminal, and then you could book another 1 1⁄2 hours, subject to availability. In these circumstances, when you got to sit down in front of the terminal, you had better know what you were going to type in.

Additionally, the Pascal and COBOL compilers ran on a batch system overnight, so you had to carefully plan a ½ hour slot for the next day to see if your program had compiled, and if it hadn’t, make any corrections and resubmit, so that you could get another attempt at maybe running it the day thereafter. As you can imagine, this system did not do wonders for velocity and there was a bonus for getting your work in early in the trimester.

To make matters worse, whenever the college IT department was faced with a low free disk space situation, they ran a program called “DailyReport” which removed “temporary files” from the disk. Unfortunately, “DailyReport” considered students’ binaries temporary files, because they could be recreated from the sources. On at least one occasion, I logged on in the morning to find that my compilation had succeeded at 2am but that DailyReport had come round at 4am to remove my binary. Try submitting your code for a software engineering class on time under these circumstances!

All of that hassle was to get a professional qualification for a job that didn’t even pay that well. It wasn’t bad, but it was not “doctor or lawyer” good. I have written about this before, but when I graduated, I could maybe look forward to a comfortable middle class existence. As a matter of fact, in my first job, my yearly income was so low that I qualified for healthcare under the state’s social health fund plans. Nothing wrong with that, I guess about 75% of the country earned under the threshold for the funds, but it just goes to show that nobody, not even me, thought I was heading for a cushy tech job.

There are some consequences to this state of affairs. First of all, in that environment, only people who really (and I mean really really) like toying with computers get into the field. It’s not just that coding wasn’t cool yet; it was actually difficult and cumbersome. Only people with a deep interest in the subject matter and with lots of passion studied computer science and they had to struggle mightily to become any good at it. But, good they typically became, because: Pressure makes diamonds.

Another consequence of this story is that, tech skills being rare, it was very easy to let it go to your head.

In the small village I grew up in, I knew literally nobody who was into technology. That’s quite lonely as it means that you have nobody to talk to about your passion and it doesn’t make you very popular either. But it also makes you think you might be a wizard. Myths and legends are full of wizards and they are always solitary figures that are holed up somewhere and doing things that nobody understands while speaking strange languages. I was a solitary figure that was holed up in my room all day doing things that nobody understood in strange languages, so I thought I was a wizard. Stands to reason.

This feeling persisted into college. Sure, there were more people there, but in the meantime computer science had become somewhat hip, and so I had a fair number of fellow students (especially at the start of the first year) who did not have my deep pre-college experience hacking with computers. Fortunately, a handful of them did and we hung out together, because it is expected for wizards to form a conventicle, especially in wizard school (forming our own little pre-HP Ravenclaw).

Most of the rest did not make it very far in college. We started the first year with 220 students and four years later, about 60 would graduate. During the introductory program, the professors helpfully said: “Look left and right of you; only one of you will be here next year.” Even then, of the people who graduated in the same year as me, I would trust approximately a quarter of them to touch a computer that I care about.

The problem with feeling you are a wizard is that it can lead to arrogance. Being in the possession of secret knowledge and arcane spells, I would look down on the muggles who were struggling to set their VCR’s clock to adapt to daylight savings time or who were trying to tame the dragon of WordPerfect without knowing the incantations and wand movements required to do so.

One day, a couple of guests came in to dine in our restaurant. They were obviously exhausted and my mother asked what was going on. They explained that they ran a small bookkeeping firm and they had just bought their first computer. For the whole day, they had been trying to set it up, install some software, and make it print. My mother graciously offered the services of the local wizard (me). Next day, I went over there and did the needful, showing off my mastery of the arcane “MODE 9600,N,8,1” and the contents of CONFIG.SYS.
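For those who never had to do this: a hedged reconstruction of the kind of MS-DOS lines involved; the exact port (COM1 here) and the printer redirection are assumptions about a typical setup of that era, not a record of what I actually typed:

```bat
REM Configure the first serial port: 9600 baud, no parity,
REM 8 data bits, 1 stop bit.
MODE COM1:9600,N,8,1
REM Redirect parallel printer output to that serial port.
MODE LPT1:=COM1
```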

Until I started working, I really did not meet anyone who knew more about computers than I did. Not even the college professors. However, when I did start working, I soon found myself surrounded by real wizards who knew much more than I did. This was a humbling experience. It’s not just that they knew and understood things I didn’t know and understand, they knew and understood so much more that I feared I might never catch up…

This is of course the young person’s underestimation of the value of time. Time heals all wounds and it affords compound interest (in both money and knowledge). Steadily grinding over a period of time does wonders.

After my first job, I kept finding people who were much smarter and more knowledgeable than me and I started to thoroughly enjoy their presence. At almost every company or contract, I found people, often older than me, who had knowledge and experience that I didn’t have. Fortunately, I had never become so full of myself that I resented meeting these people. It also didn’t break any fundamental feelings of superiority because, being Dutch, I really didn’t have these to begin with; all things considered, feeling like a wizard was a really thin layer of veneer.

Holland is the country where people say: “If you behave like a normal person, you are already crazy enough.” Dutch parents tell their kids to “Doe normaal!” (Act normal!) It is one of the two Dutch sentences the lovely Mrs Wednesday Wisdom can pronounce. The other one being: “Hand voor je mond!” (Hand in front of your mouth!), which is what we say if someone yawns while providing access to the light source at the other end of the tube that goes through your body all the way to the other end.

When I joined Google, I finally found my place. Here was a company where everyone was a wizard. When people asked me what it was like, I used to say: “Well, everyone’s a wizard and I am the dumbest person in the building”. Being in that exalted company robbed me of any last vestiges of feeling special that might still have been lingering. My colleagues had written books on a range of topics, had authored open source software that everyone in the world used, patented new stuff left and right, invented new (and useful) programming languages, and together we did things that had never been seen before. I cannot lay claim to any significant contributions to any of that, but I am really good at taking other people’s great ideas and running with them, which is a skill in itself. And also, I can grind, which in a world where every success is 1% inspiration and 99% perspiration is also quite useful.

Being the dumbest person in the building means that you experience a lot of pressure from everyone above you. But, you have to realize that the easiest way to get better at something is if you have the examples right in front of your eyes to copy and learn from. It might be a lot of pressure, but pressure makes diamonds!

I kept running into people that looked like the younger me, but “gone wrong”. I once interviewed a young gentleman from South America who had applied for an SRE role at Google. I started the interview with a lowball Linux system administration question, which he answered satisfactorily. After giving the answer he looked at me proudly and said: “Have you ever met anyone aged 26 who knows as much about Linux as I do?” “Yes,” I answered, “everyone in this building.”

He did not make the hiring bar, but he became legendary for asking our lovely receptionist out on a date on the way out of the building.

In the hiring committees we regularly rejected candidates who we felt were probably smart enough, but who had spent too much time being a big fish in small ponds. It is not hard to shine in a dim room and these people had grown complacent and not continued to develop as engineers. I’ll say it again: Pressure makes diamonds, without that pressure you just have some carbon…

The times have really changed a lot. Compared to 40 years ago, there is an abundance of everything you need to get going in the field: Powerful computers are cheap and widely available; there’s the Internet, and, until recently, getting into technology was a surefire way to get a high paying job. Consequently, it appears to me that many people choose a career in tech not because of a passion for the subject matter, but for purely monetary reasons. I do not begrudge anyone that choice though and I totally understand parents who give their children three career options: Doctor, lawyer, software engineer. But I do regularly see that lack of passion and without that passion you do not have the intrinsic motivation to grind and without that grinding it is hard to become good.

I regularly told people who asked how to land a job at Google that the only people who could do that were the ones who had thoroughly misspent their childhood.

Personally, I miss the days of poring over a book on Z80 assembler and trying to make heads or tails of it. I got into this field because it seemed to me that making the idiot box do something useful was one of the greatest puzzles around and it has never disappointed in that sense. Choosing my employers so that I was always the dumbest person in the building worked extremely well for me. As a strategy for selecting your next gig, I can highly recommend it.

The smart people in the building read Wednesday Wisdom, so you should too!





Download audio: https://api.substack.com/feed/podcast/160757610/b7d0f0e19f2c8578404ca5afbb4f1891.mp3

Storage, DoGE, and cognitive biases against tape


The Department of Government Efficiency, Musk's vehicle, made news by "discovering" that the General Services Administration uses tapes, and plans to save $1M by switching to something else (disks, or cloud-based storage). Long-time readers of this blog may remember I used to talk a lot about storage and tape backup. Guess it's time to get my antique Storage Nerd hat out of the closet (this is my first storage post since 2013) to explain why tape is still relevant in an era of 400Gb backbone networks and 30TB SMR disks.

The SaaS revolution has utterly transformed the office automation space. The job I had in 2005, in the early years of this blog, only exists in small pockets anymore. So many office systems have been SaaSified that the old problems I used to blog about around backups and storage tech are much less pressing in the modern era. Where those problems persist are places with decades of old file data, starting in the mid-to-late 1980s, that is still being hauled around. Even when I was still doing this work in the late 2000s, the needle was shifting toward large arrays of cheap disks replacing tape arrays.

Where you still see tape being used are offices with policies for "off-site" or "offline" storage of key office data. A lot of that is done on disk these days too, but some offices kept their tape libraries. I suspect a lot of what DoGE found was in this category of offices retaining tape infrastructure. Is disk cheaper here? Marginally; the true savings will be much less than the $1M headline rate.

But there is another area where tape continues to be the economical option, and it's another area DoGE is going to run into: large scientific datasets.

To explain why, I want to use a contrasting example: A vacation picture you took on an iPhone in 2011, put into Dropbox, shared twice, and haven't looked at in 14 years. That file has followed you to new laptops and phones, unseen, unloved, but available. A lot goes into making sure it's available.

All the big object-stores like S3, and file-sync-and-share services (like Dropbox, Box, MS live, Google Drive, Proton Drive, etc) use a common architecture because this architecture has been proven to be reliable at avoiding visible data-loss:

  • Every uploaded file is split into 4KB blocks (the size is relevant to disk technology, which I'm not going into here)
  • Each block is written between 3 and 7 times to disk in a given datacenter or region, the exact replication factor changes based on service and internal realities
  • Each block is replicated to more than one geographic region as a disaster resilience move, generally at least 2, often 3 or more

The end result of the above is that the 1MB vacation picture is written to disk 6 to 14 different times. The nice thing about the above is you can lose an entire rack-row of a datacenter and not lose data; you might lose 2 of your 5 copies of a given block, but you have 3 left to rebuild, and your other region still has full copies.

But I mentioned this 1MB file has been kept online for 14 years. Assuming an average disk life-span of 5 years, each block has been migrated to new hardware 3 times in those years. Meaning each 4KB block of that file has been resident on between 24 and 42 hard drives; or more, if your provider replicates to more than 2 discrete geographic regions. Those drives have been spinning and using power (and therefore requiring cooling) the entire time.
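Back-of-the-envelope, the copy counts above work out as follows. This is a sketch under the stated assumptions; real systems vary block sizes, replication factors, and migration schedules:

```python
import math

BLOCK_BYTES = 4 * 1024                        # 4KB blocks
file_bytes = 1 * 1024 * 1024                  # the 1MB vacation picture
blocks = math.ceil(file_bytes / BLOCK_BYTES)  # 256 blocks

per_region_low, per_region_high = 3, 7   # copies of each block in one region
regions = 2                              # at least two geographic regions

copies_low = per_region_low * regions    # 6 physical copies
copies_high = per_region_high * regions  # 14 physical copies

# Over 14 years with ~5-year drive lifespans, each copy lands on several
# hardware generations, so each block touches roughly
# copies x generations distinct drives.
generations = 4                          # initial write plus ~3 migrations
drives_low = copies_low * generations    # 24 drives at the low end
```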

These systems need to go to all of this effort because they need to be sure that all files are available all the time, when you need it, where you need it, as fast as possible. If a person in that vacation photo retires, and you suddenly need that picture for the Retirement Montage at their going away party, you don't want to wait hours for it to come off tape. You want it now.

Contrast this to a scientific dataset. Once the data has stopped being used for Science! it can safely be archived until someone else needs to use it. This is the use-case behind AWS S3 Glacier: you pay a lot less for storing data, so long as you're willing to accept delays measurable in hours before you can access it. This is also the use-case where tape shines.

A lab gets done chewing on a dataset sized at 100TB, which is pretty chonky for 2011. They send it to cold storage. Their IT section dutifully copies the 100TB dataset onto LTO-5 tapes at 1.5TB per tape, for a stack of 67 tapes, and removes the dataset from their disk-based storage arrays.

Time passes, as with the Dropbox-style data. LTO drives can read between 1 and 2 generations prior. Assuming the lab IT section keeps up on tape technology, it would be the advent of LTO-7 in 2015 that would prompt a great restore and rearchive effort of all LTO-5 and previous media. LTO-7 can do 6TB per tape, for a much smaller stack of 17 tapes.

LTO-8 changed this, with only a one-generation lookback. So when LTO-8 comes out in 2017 with a 9TB capacity, a restore/rearchive effort runs again, changing our stack of tapes from 17 to 12. LTO-9 comes out in 2021 with 18TB per tape, and the stack reduces to 6 tapes to hold 100TB.
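The tape-stack arithmetic is simple ceiling division over the per-tape capacities quoted above:

```python
import math

DATASET_TB = 100
capacity_tb = {"LTO-5": 1.5, "LTO-7": 6, "LTO-8": 9, "LTO-9": 18}

# Tapes needed per generation to hold the whole dataset.
stacks = {gen: math.ceil(DATASET_TB / cap) for gen, cap in capacity_tb.items()}
# {'LTO-5': 67, 'LTO-7': 17, 'LTO-8': 12, 'LTO-9': 6}
```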

All in all, our cold dataset had to relocate to new media three times, same as the disk-based stuff. However, keeping stacks of tape in a climate-controlled room is vastly cheaper than a room of powered, spinning disk. The actual reality is somewhat different, as the few data archive people I know mention they do great restore/archive runs about every 8 to 10 years, largely driven by changes in drive connectivity (SCSI, SATA, FibreChannel, Infiniband, SAS, etc), OS and software support, and corporate purchasing cycles. Keeping old drives around for as long as possible is fiscally smart, so the true number of recopy events for our example data is likely one.

So another lab wants to use that dataset and puts in a request. A day later, the data is on a disk-array for usage. Done. Carrying costs for that data in the intervening 14 years are significantly lower than the always available model of S3 and Dropbox.

Tape: still quite useful in the right contexts.


No, VAT isn’t a tariff – here’s what Trump (and others) get wrong

Today’s episode of Untaxing is about Jaffa cakes and VAT. With the benefit of hindsight, that’s a very small VAT issue on a day when VAT has become a very large geopolitical issue. Donald Trump is considering applying tariffs to much of the world because of VAT. He believes it’s a tariff. You’ll be unsurprised […]

Source


Talk is cheap


(Like this article? Read more Wednesday Wisdom! No time to read? No worries! This article will also become available as a podcast on Thursday)

On another note: In honor of my darling wife’s birthday and our ten year anniversary, I have minted her her own crypto token: The Li$a. Want to become a Li$a millionaire? Send me your wallet address and I will send you a cool million.

Everywhere I go, there are things that are suboptimal. That’s not unexpected; the universe is a cold, dark and lonely place that was clearly not created for either our comfort or our convenience. Accompanying these suboptimalities are people who can explain in detail what is wrong with the world; newspapers and social media are full of them. But does all that talk help?

The people who are explaining all the world’s problems seem to operate under the assumption that people either do not know that the problem exists or that they don’t care. They are usually wrong about the first thing, but usually right about the second. However, in my experience, just explaining the problem does not help one iota.

Every morning, I wake up and read a high quality Dutch newspaper. And every morning, without fail, it is full of stories that describe something that is wrong with the world. Here is a selection of today’s picks: American scientists are not allowed to join an IPCC panel on climate change, one of our dumber ministers (and that is saying something these days), called Zelensky “not democratically elected”, our lame duck prime minister is powerless in Europe, female athletes who complain about harassment from their coaches are not listened to, kids spend too much time on TikTok, and one of our public broadcasting societies (VPRO, of which I used to be a member) is an inward focused mess. Here is what is quite rare though: Any talk about solutions. It is usually just a whole lot of complaining and then sometimes a vague call to action: Someone should do something! But what exactly should be done? And by whom?

Let’s take an example.

In Europe there is considerable worry about dependence on American firms for cloud-based services, especially by governments and government agencies. There are newspaper articles about this problem, blogs, policy briefs, LinkedIn posts, and even questions in parliament. And that is usually where it ends. There are big unrealistic vague calls to action and then the next thing we read is that the Dutch Internet domain registrar is planning to move a big part of their operation to AWS.

This problem is of course massive. First of all, there is no European cloud alternative that can provide a level of service that is anywhere near what Microsoft, Amazon, and Google can offer. Having worked at or with all three of these cloud providers, I can say with some confidence that suggestions that any current European provider can offer a competitive service are simply laughable. But, it is not physically impossible. Europe has the money and it definitely has the talent. Sure, it doesn’t help that some of its greatest talents have moved to the US or work for the Americans, but that can probably be overcome. It is a gargantuan task though, and it is not at all clear that there exists enough willpower to solve it. I mean, after more than a decade of signals that it is really time to get more self-sufficient in defense and energy, the Europeans were still caught like deer in the headlights when Russia invaded Ukraine and, more recently, when Donald Trump got elected and immediately started rolling out his mafia-like “NATO as a protection racket” policies. If we cannot fix things that are that important, will we really get our act together when it comes to, I don’t know, running our own virtual machines?

And thus, after many articles that explain how really big and really pressing and really important the problem is, nothing happens. As we say in Holland: Everyone takes a leak and things continue as they were.

(This sounds better in Dutch because it rhymes: Iedereen doet een plas en alles is weer zoals het was.)

You can see this same lack of tangible action at work on much smaller scales too. Many teams that I joined in the past ran complicated infrastructures where there are lots of things going wrong. Things are on fire, customers complain, and the pager rings off the hook. In situations like this there is no shortage of people who can explain what the problem is. But, like with the big problems, everyone takes a leak, and things continue as they were.

I get it though. All of these problems are relatively huge, often complicated, without obvious solutions, and mostly not urgent, where “urgent” is defined as: Our comfortable existence will be upended tomorrow if we don’t fix this today. But because of this, we are stuck a bit in a loop of problems that are complained about but not solved unless they become so urgent that we need to drop everything and declare a code red, because eventually our comfortable existence will actually be under threat of ending on short notice.

The root cause of the paralysis that often follows the explanation of a huge problem is that these explanations are not accompanied by reasonable calls to action. For example: People worried about the dominance of US cloud providers tell everyone that our government should stop using Microsoft’s cloud solutions. That is quite simply not possible on short notice and it is not even possible to determine where to begin. The people who say it is possible clearly have no experience providing IT solutions to large bureaucracies. The annals of IT history are littered with projects on a much smaller scale gone horribly wrong. Did anyone say Horizon? Or KEI? In a similar vein, people who say that a team should stop everything they are doing and attack some problem are equally unrealistic. The team has lots to do this quarter and pretty much no annoying and urgent problem is so annoying or so urgent that the team can afford to drop everything and refocus.

So, what to do?

First of all, realize that problems that took decades to build up cannot be solved in years. Similarly, problems that were years in the making cannot be solved in weeks. If it took decades to get to where we are, it will take decades to solve. That doesn’t mean we shouldn’t start today, but it does mean that we should be realistic in our goals for the short term.

Next: Most people truly underestimate the effects of work compounding over time. Much like compound interest, doing something and then keeping at it really leads to amazing results even after a relatively short amount of time. That’s why I advise teams that are in fire fighting mode not to drop everything, but instead to commit to spending a manageable amount of their time to solving something, anything, but also to keep at that quarter after quarter. Even if you consistently spend only 10-15% of your time solving the things that you can solve, that work really adds up.
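A toy model makes the compounding concrete. Assuming (purely for illustration) a fixed backlog of toil and a team that fixes 10% of whatever remains each quarter:

```python
# Illustrative only: no new toil arrives, and the team fixes 10% of
# what's left every quarter.
backlog = 100.0
for quarter in range(8):   # two years
    backlog *= 0.90        # 10% of the remainder gets fixed
print(round(backlog))      # prints 43: more than half the toil is gone
```

Eight unglamorous quarters at a modest pace clears more than half the pile, which is the point: steady beats heroic.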

There are other advantages of this approach too: By setting your goals at an attainable level, you increase people’s belief that this is something that can and eventually will be solved. That’s the kind of positivity that begets more positivity. You are not only solving things, you are also creating confidence that this is a problem that can and will be solved. So when the time comes for bigger or more painful investments, you have some work that you can point to that will inspire support for your program. Nobody is going to spend oodles of money on an ambitious project, but people will be ready to fund the next phase of a project that has been going on for a while and that has been delivering results.

For the European dependency on US cloud providers I suggest a similar approach. Getting rid of all US clouds for all services and all government agencies is of course an impossible project. So here is my suggestion: Start with email. Make a principled decision that you are going to switch all government email to a newly built European service. That’s not a comprehensive solution by any means, but it is a start, and one that I think is doable. It will probably take ten years or so, but it is something that is attainable. For all its importance as the backbone of government information exchange, email is mostly a solved problem. The technologies are well understood, there is ample software available, and wherever we are going in the future, email will probably play a part. Remember: The whole Google cloud started with Gmail! So why not do the same? Don’t get me wrong, it will still be a massive project, but if we can’t even do that, there is nothing left but total and utter despair 🙂.

So, don’t just complain about the status quo. Talk is cheap; always offer solutions too. They don’t have to be all-encompassing and solve every possible problem, but propose something and get going.

Something tangible that you can do today is subscribe to Wednesday Wisdom. It is free!

Download audio: https://api.substack.com/feed/podcast/157829048/254f4be9c6d776da4683cc363a343725.mp3
Flameeyes
100 days ago
London, Europe

Pike is wrong on bloat

1 Share

This is my response to Rob Pike’s words On Bloat.

I’m not surprised to see this from Pike. He’s a NIH (Not Invented Here) extremist. And yes, in this aspect he’s my spirit animal when coding for fun. I’ll avoid using a framework or a dependency because it’s not the way that I would have done it, and it doesn’t do it quite right… for me.

And he correctly recognizes the technical debt that an added dependency involves.

But I would say that he has two big blind spots.

  1. He doesn’t recognize that not using the available dependency is also adding huge technical debt. Every line of code you write is code that you have to maintain, forever.

  2. The option for most software isn’t “use the dependency” vs “implement it yourself”. It’s “use the dependency” vs “don’t do it at all”: if implementing it yourself means adding 10 human years to the product, then most of the time that trade-off makes the feature not worth building at all.

He shows a dependency graph of Kubernetes. Great. So are you going to write your own Kubernetes now?

Pike is a good enough coder that he can write his own editor (wikipedia: “Pike has written many text editors”). So am I. I don’t need dependencies to satisfy my own requirements.

But it’s quite different if you need to make a website that suddenly needs ADA support, and now the EU forces a certain cookie behavior, and designers (in collaboration with lawyers) mandate a certain layout of the cookie consent screen, and the third party ad network requires some integration.

What are you going to do? Demand funding for 100 SWE years to implement it yourself? And in the meantime, just not be able to advertise during BFCM (Black Friday/Cyber Monday)? Not launch the product for 10 years? Just live with the fact that no customer can reach your site if they use Opera on mobile?

I feel like Pike is saying “yours is the slowest website that I ever regularly use”, to which the answer is “yeah, but you do use it regularly”. If the site hadn’t launched, then you wouldn’t be able to even choose to use it.

And comparing to the 70s. Please. Come on. If you ask a “modern coder” to solve a “1970s problem”, it’s not going to be slow, is it? They could write it in Python and it wouldn’t even be a remotely fair fight.

Software is slower today not because the problems are more complex in terms of compute (though they very, very much are), but because today’s compute capacity simply affords wasting some of it, which is what lets us solve those complex problems at all.

People do things because there’s a perceived demand for it. If the demand is “I just like coding”, then as long as you keep coding there’s no failure.

Pike’s technical legacy has very visible scars from these blind spots of his.

Flameeyes
111 days ago
London, Europe