
Fossil fuels are dead (and here's why)


So, I'm going to talk about Elon Musk again, everybody's least favourite eccentric billionaire asshole and poster child for the Thomas Edison effect—getting out in front of a bunch of faceless, hard-working engineers and waving the orchestra conductor's baton while providing direction. Because I think he may be on course to become a multi-trillionaire—and it has nothing to do with cryptocurrency, NFTs, or colonizing Mars.

This we know: Musk has goals (some of them risible, some of them much more pragmatic), and within the limits of his world-view—I'm pretty sure he grew up reading the same right-wing near-future American SF yarns as me—he's fairly predictable. Reportedly he sat down some time around 2000 and made a list of the challenges facing humanity within his anticipated lifetime: roll out solar power, get cars off gasoline, colonize Mars, it's all there. Emperor of Mars is merely his most-publicized, most outrageous end goal. Everything then feeds into achieving the means to get there. But there are lots of sunk costs to pay for: getting to Mars ain't cheap, and he can't count on a government paying his bills (well, not every time). So each step needs to cover its costs.

What will pay for Starship, the mammoth actually-getting-ready-to-fly vehicle that was originally called the "Mars Colony Transporter"?

Starship is gargantuan. Fully fuelled on the pad it will weigh 5000 tons. In fully reusable mode it can put 100-150 tons of cargo into orbit—significantly more than a Saturn V or an Energiya, previously the largest launchers ever built. In expendable mode it can lift 250 tons, more than half the mass of the ISS, which was assembled over 20 years from a seemingly endless series of launches of 10-20 ton modules.

Seemingly even crazier, the Starship system is designed for one-hour flight turnaround times, comparable to a refueling stop for a long-haul airliner. The Mechazilla tower designed to catch descending stages in the last moments of flight and re-stack them on the pad is quite without precedent in the space sector, and yet they're prototyping the thing. Why would you even do that? Well, it makes no sense if you're still thinking of this in traditional space launch terms, so let's stop doing that. Instead it seems to me that SpaceX are trying to achieve something unprecedented with Starship. If it works ...

There are no commercial payloads that require a launcher in the 100 ton class, and precious few science missions. Currently the only clear-cut mission is Starship HLS, which NASA are drooling for—a derivative of Starship optimized for transporting cargo and crew to the Moon. (It loses the aerodynamic fins and the heat shield, because it's not coming back to Earth: it gets other modifications to turn it into a Moon truck with a payload in the 100-200 ton range, which is what you need if you're serious about running a Moon base on the scale of McMurdo station.)

Musk has trailed using early Starship flights to lift Starlink clusters—upgrading from the 60 satellites a Falcon 9 can deliver to something over 200 in one shot. But that's a very limited market.

So what could pay for Starship, and furthermore require a launch vehicle on that scale, and demand as many flights as Falcon 9 got from Starlink?

Well, let's look at the way Starlink synergizes with Musk's other businesses. (Bear in mind it's still in the beta-test stage of roll-out.) Obviously cheap wireless internet with low latency everywhere is a desirable goal: people will pay for it. But it's not obvious that enough people can afford a Starlink terminal for themselves. What's paying for Starlink? As Robert X. Cringely points out, Starlink is subsidized by the FCC—cablecos like Comcast can hand Starlink terminals to customers in remote areas in order to meet rural broadband service obligations that enable them to claim huge subsidies from the FCC: in return they get to milk the wallets of their much easier-to-reach urban/suburban customers. This covers the roll-out cost of Starlink, before Musk starts marketing it outside the USA.

So. What kind of vertically integrated business synergy could Musk be planning to exploit to cover the roll-out costs of Starship?

Musk owns Tesla Energy. And I think he's going to turn a profit on Starship by using it to launch space-based solar power (SBSP) satellites. By my back of the envelope calculation, a Starship can put roughly 5-10MW of space-rated photovoltaic cells into orbit in one shot. The ROSA (Roll Out Solar Arrays) panels now installed on the ISS are ridiculously light by historic standards, and flexible: they can be rolled up for launch, then unrolled on orbit. Current ROSA panels have a mass of 325kg each, and three pairs provide 120kW of power to the ISS: 2 tonnes for 120kW suggests that a 100 tonne Starship payload could produce 6MW using current generation panels, and I suspect a lot of that weight is structural overhead. The PV material used in ROSA reportedly weighs a mere 50 grams per square metre, comparable to lightweight laser printer paper, so a payload of pure PV material could have an area of up to 2 million square metres. At 100 watts of usable sunlight per square metre at Earth's orbit, that translates to 200MW. So Starship is definitely getting into the payload ball-park we'd need to make orbital SBSP stations practical. 1970s proposals foundered on the costs of the Space Shuttle, which was billed as offering $300/lb launch costs (a sad and pathetic joke), but Musk is selling Starship as a $2M/launch system, which works out at $20/kg.
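To make that arithmetic explicit, here's a minimal Python sketch of the same back-of-the-envelope numbers (every input is an assumption quoted above, not engineering data):

# Back-of-envelope: what one 100-tonne Starship payload buys you in orbit.
# All inputs are the post's assumptions, not engineering data.
payload_kg = 100_000                      # one reusable Starship launch

# Current-generation ROSA-style panels: roughly 2 tonnes per 120 kW
rosa_w_per_kg = 120_000 / 2_000           # = 60 W/kg
print(f"ROSA-class payload: {payload_kg * rosa_w_per_kg / 1e6:.0f} MW")   # ~6 MW

# Idealised limit: pure PV sheet at 50 g/m^2, 100 W usable per m^2
area_m2 = payload_kg / 0.050              # = 2,000,000 m^2
print(f"Pure PV sheet: {area_m2 / 1e6:.0f} million m^2 -> {area_m2 * 100 / 1e6:.0f} MW")

# Launch cost per kilogram at Musk's advertised $2M/flight
print(f"Launch cost: ${2e6 / payload_kg:.0f}/kg")                         # $20/kg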

So: disruptive launch system meets disruptive power technology, and if Tesla Energy isn't currently brainstorming how to build lightweight space-rated PV sheeting in gigawatt-up quantities I'll eat my hat.

Musk isn't the only person in this business. China is planning a 1 megawatt pilot orbital power station for 2030, increasing capacity to 1GW by 2049. Entirely coincidentally, I'm sure, the giant Long March 9 heavy launcher is due for test flights in 2030: ostensibly to support a Chinese crewed Lunar expedition, but if you're going to build SBSP stations in bulk and the USA refuses to cooperate with you in space, having your own Starship clone would be handy.

I suspect if Musk uses Tesla Energy to push SBSP (launched via Starship) he will find a way to use his massive PV capacity to sell carbon offsets to his competitors. (Starship is designed to run on a fuel cycle that uses synthetic fuels—essential for Mars—that can be manufactured from carbon dioxide and water, if you add enough sunlight. Right now it burns fossil methane, but an early demonstration of the capability of SBSP would be using it to generate renewable fuel for its own launch system.)

Globally, we use roughly 18TW of power on a 24x7 basis. SBSP's big promise is that, unlike ground-based solar, the PV panels are in constant sunlight: there's no night when you're far enough out from the planetary surface. So it can provide base load power, just like nuclear or coal, only without the carbon emissions or long-lived waste products.

Assuming a roughly 70% transmission loss from orbit (beaming power by microwave to rectenna farms on Earth is inherently lossy) we would need roughly 60TW of PV panels in space. Which is 60,000 GW of panels, at roughly 10 km^2 per GW. With maximum optimism that looks like somewhere in the range of 3000-60,000 Starship launches, which at $2M/flight comes to $6Bn to $120Bn ... which, over a period of years to decades, is chicken feed compared to the profit to be made by disrupting the 95% of the fossil fuel industry that just burns the stuff for energy. The cost of manufacturing the PV cells is another matter, but again: ground-based solar is already cheaper to install than shoveling coal into existing power stations, and in orbit it produces four times as much electricity per unit area.
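The same kind of minimal Python sketch for the global numbers; the demand, loss, flight-cost, and flight-count figures are the post's assumptions, and the output shows that the launch-count range implies 1-20 GW of panels per flight, well beyond the 200MW-per-launch figure for today's PV material; that gap is where the "maximum optimism" lives:

# Global demand -> orbital capacity -> launch cost, using the post's figures.
demand_w = 18e12                     # ~18 TW continuous global power use
loss = 0.70                          # microwave downlink loss, orbit -> rectenna
panels_w = demand_w / (1 - loss)
print(f"PV needed in orbit: {panels_w / 1e12:.0f} TW")          # 60 TW

cost_per_flight = 2e6                # Musk's advertised $2M/launch target
for flights in (3_000, 60_000):      # the post's "maximum optimism" range
    cost_bn = flights * cost_per_flight / 1e9
    gw_per_flight = panels_w / 1e9 / flights
    print(f"{flights:>6} flights: ${cost_bn:.0f}Bn, {gw_per_flight:.0f} GW per launch")
    # -> 3,000 flights: $6Bn at 20 GW/launch; 60,000 flights: $120Bn at 1 GW/launch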

Is Musk going to become a trillionaire? I don't know. He may fall flat on his face: he may not pick up the gold brick that his synergized businesses have placed at his feet: any number of other things could go wrong. I find it interesting that other groups—notably the Chinese government—are also going this way, albeit much more slowly and timidly than I'm suggesting. But even if Musk doesn't go there, someone is going to get SBSP working by 2030-2040, and in 2060 people will be scratching their heads and wondering why we ever bothered burning all that oil. But most likely Musk has noticed that this is a scheme that would make him unearthly shitpiles of money (the global energy sector in 2014 had revenue of $8Tn) and demand the thousands of Starship flights it will take to turn reusable orbital heavy lift into the sort of industry in its own right that it needs to be before you can start talking about building a city on Mars.

Exponentials, as COVID19 has reminded us, have an eerie quality to them. I think a 1MW SBSP station by 2030 is highly likely, if not inevitable, given Starship's lift capacity. But we won't have a 1GW SBSP station by 2049: we'll blow through that target by 2035, have a 1TW cluster that lights up the night sky by 2040, and by 2050 we may have ended use of non-synthetic fossil fuels.
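For a sense of what that timeline assumes, a quick sketch of the implied growth rate (the milestone years and capacities are the paragraph's own guesses):

# Implied growth rate of the 1 MW (2030) -> 1 GW (2035) -> 1 TW (2040) timeline,
# assuming smooth exponential growth between the milestones.
milestones = [(2030, 1e6), (2035, 1e9), (2040, 1e12)]    # (year, watts)
for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
    rate = (p1 / p0) ** (1 / (y1 - y0))
    print(f"{y0} -> {y1}: x{rate:.1f} per year")          # ~4x per year, sustained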

If this sounds far-fetched, remember that back in 2011, SpaceX was a young upstart launch company. In 2010 they began flying Dragon capsule test articles: in 2011 they started experimenting with soft-landing first stage boosters. In the decade since then, they've grabbed 50% of the planetary launch market, launched the world's largest comsat cluster (still expanding), begun flying astronauts to the ISS for NASA, and demonstrated reliable soft-landing and re-flight of boosters. They're very close to overtaking the Space Shuttle in terms of reusability: no shuttle flew more than 39 times, and SpaceX recently announced that their 10-flight target for Falcon 9 was just a goalpost (which they've already passed). If you look at their past decade, then a forward projection gets you more of the same, on a vastly larger scale, as I've described.

Who loses?

Well, there will be light pollution and the ground-based astronomers will be spitting blood. But in a choice between "keep the astronomers happy" and "climate oopsie, we all die", the astronomers lose. Most likely the existence of $20/kg launch systems will facilitate a new era of space-based astronomy: this is the wrong decade to be raising funds to build something like ELT, only bigger.

1 public comment

LeMadChef · 17 days ago · Denver, CO
Surprising optimism from Charlie here.

Don’t preorder ebooks from Packt Publishing


Two months ago, I preordered an interesting-looking ebook title from Packt Publishing. Neither the post-purchase experience nor the final product lived up to my expectations.

Read more …




Testing cargo deliveries by rail to town centres


A scheme to use converted passenger trains to deliver freight into the centre of towns was tested last week using Euston station.

Railways used to earn as much money from carrying freight as from carrying passengers, and at times more, but the rise of refrigeration killed off the need for last-minute deliveries into town centres, and the growth in road freight killed off most of the rest. Rail freight today, while still substantial, is mainly for aggregates and cargo containers.

Much of the goods shipped to UK retailers today, though, comes from regional hubs and is delivered by van to the final destination – and this is where the trial of using passenger railways comes in.

A converted Class 319 passenger train can carry freight in the same “roll cages” often used by lorries delivering to retailers, and bring it right into the centre of cities, where the trains park at normal passenger platforms and are unloaded.

These roll cages could either be taken to stores by a local van, reducing mileage on the roads, or, if carrying goods sold online, the parcels could be unpacked locally and then delivered by bike to their final destination.

As well as retail, the freight operation could transport other light goods needed rapidly by businesses.

Last Wednesday, Network Rail and distribution firm Orion showed how the concept works at Euston station.

In addition to reducing road traffic on the main roads between the warehouse and the town centre, the trains can travel at up to 100mph – twice the average speed of road traffic.

Some of the UK’s largest parcel carriers have expressed interest in using the new high-speed logistics service using the converted trains. The first will start running later this year between the Midlands and Scotland. More routes could be added in 2022 dependent on customer need and available train paths.

The converted trains can operate in 4-, 8- or 12-carriage formations, and each carriage carries the same amount of cargo as an articulated lorry.

This article was published on ianVisits



How a Docker footgun led to a vandal deleting NewsBlur’s MongoDB database


tl;dr: A vandal deleted NewsBlur’s MongoDB database during a migration. No data was stolen or lost.

I’m in the process of moving everything on NewsBlur over to Docker containers in prep for a big redesign launching next week. It’s been a great year of maintenance and I’ve enjoyed the fruits of Ansible + Docker for NewsBlur’s 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models). The day was wrapping up and I settled into a new book on how to tame the machines once they’re smarter than us when I received a strange NewsBlur error on my phone.

"query killed during yield: renamed collection 'newsblur.feed_icons' to 'newsblur.system.drop.1624498448i220t-1.feed_icons'"

There is honestly no set of words in that error message that I ever want to see again. What is drop doing in that error message? Better go find out.

I logged into the MongoDB machine to check out what state the DB was in, and came across the following…

nbset:PRIMARY> show dbs
READ__ME_TO_RECOVER_YOUR_DATA   0.000GB
newsblur                        0.718GB

nbset:PRIMARY> use READ__ME_TO_RECOVER_YOUR_DATA
switched to db READ__ME_TO_RECOVER_YOUR_DATA
    
nbset:PRIMARY> db.README.find()
{ 
    "_id" : ObjectId("60d3e112ac48d82047aab95d"), 
    "content" : "All your data is a backed up. You must pay 0.03 BTC to XXXXXXFTHISGUYXXXXXXX 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: FTHISGUY@recoverme.one and you will receive a link to download your database dump." 
}

Two thoughts immediately occurred:

  1. Thank goodness I have some recently checked backups on hand
  2. No way they have that data without me noticing

Three and a half hours before this happened, I switched the MongoDB cluster over to the new servers. When I did that, I shut down the original primary in order to delete it in a few days when all was well. And thank goodness I did that as it came in handy a few hours later. Knowing this, I realized that the hacker could not have taken all that data in so little time.

With that in mind, I’d like to answer a few questions about what happened here.

  1. Was any data leaked during the hack? How do you know?
  2. How did NewsBlur’s MongoDB server get hacked?
  3. What will happen to ensure this doesn’t happen again?

Let’s start by talking about the most important question of all which is what happened to your data.

1. Was any data leaked during the hack? How do you know?

I can definitively write that no data was leaked during the hack. I know this because of two different sets of logs showing that the automated attacker only issued deletion commands and did not transfer any data off of the MongoDB server.

Below is a snapshot of the bandwidth of the db-mongo1 machine over 24 hours:

You can imagine the stress I experienced in the forty minutes between 9:35p, when the hack began, and 10:15p, when the fresh backup snapshot was identified and put into gear. Let's break down each moment:

  1. 6:10p: The new db-mongo1 server was put into rotation as the MongoDB primary server. This machine was the first of the new, soon-to-be private cloud.
  2. 9:35p: Three hours later an automated hacking attempt opened a connection to the db-mongo1 server and immediately dropped the database. Downtime ensued.
  3. 10:15p: Before the former primary server could be placed into rotation, a snapshot of the server was made to ensure the backup would not delete itself upon reconnection. This cost a few hours of downtime, but saved nearly 18 hours of a day’s data by not forcing me to go into the daily backup archive.
  4. 3:00a: Snapshot completes, replication from original primary server to new db-mongo1 begins. What you see in the next hour and a half is what the transfer of the DB looks like in terms of bandwidth.
  5. 4:30a: Replication, which is inbound from the old primary server, completes, and now replication begins outbound on the new secondaries. NewsBlur is now back up.

The most important bit of information the above chart shows us is what a full database transfer looks like in terms of bandwidth. From 6p to 9:30p, the amount of data was the expected amount from a working primary server with multiple secondaries syncing to it. At 3a, you'll see an enormous amount of data transferred.

This tells us that the hacker was an automated digital vandal rather than a concerted hacking attempt. And if we were to pay the ransom, it wouldn’t do anything because the vandals don’t have the data and have nothing to release.

We can also reason that the vandal was not able to access any files on the server outside of MongoDB, thanks to a recent version of MongoDB running in a Docker container. Unless the attacker had access to a 0-day for both MongoDB and Docker, it is highly unlikely they were able to break out of the MongoDB server connection.

While the server was being snapshotted, I used the time to figure out how the hacker got in.

2. How did NewsBlur’s MongoDB server get hacked?

Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn't work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world. So while my firewall was “active”, doing a sudo iptables -L | grep 27017 showed that MongoDB was open to the world. This has been a Docker footgun since 2014.

To be honest, I’m a bit surprised it took over 3 hours from when I flipped the switch to when a hacker/vandal dropped NewsBlur’s MongoDB collections and pretended to ransom about 250GB of data. This is the work of an automated hack and one that I was prepared for. NewsBlur was back online a few hours later once the backups were restored and the Docker-made hole was patched.

It would make for a much more dramatic read if I was hit through a vulnerability in Docker instead of a footgun. By having Docker silently override the firewall, Docker has made it easier for developers who want to open up ports on their containers at the expense of security. Better would be for Docker to issue a warning when it detects that the most popular firewall on Linux is active and filtering traffic to a port that Docker is about to open.

The second reason we know that no data was taken comes from looking through the MongoDB access logs. With these rich and verbose logging sources we can invoke a pretty neat command to find everybody who accessed MongoDB and is not one of the 100 known NewsBlur machines.


$ cat /var/log/mongodb/mongod.log | egrep -v "159.65.XX.XX|161.89.XX.XX|<< SNIP: A hundred more servers >>"

2021-06-24T01:33:45.531+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26003 #63455699 (1189 connections now open)
2021-06-24T01:33:45.635+0000 I NETWORK  [conn63455699] received client metadata from 171.25.193.78:26003 conn63455699: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.010+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26557 #63455724 (1189 connections now open)
2021-06-24T01:33:46.092+0000 I NETWORK  [conn63455724] received client metadata from 171.25.193.78:26557 conn63455724: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.500+0000 I NETWORK  [conn63455724] end connection 171.25.193.78:26557 (1198 connections now open)
2021-06-24T01:33:46.533+0000 I NETWORK  [conn63455699] end connection 171.25.193.78:26003 (1200 connections now open)
2021-06-24T01:34:06.533+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:10056 #63456621 (1266 connections now open)
2021-06-24T01:34:06.627+0000 I NETWORK  [conn63456621] received client metadata from 185.220.101.6:10056 conn63456621: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:06.890+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:21642 #63456637 (1264 connections now open)
2021-06-24T01:34:06.962+0000 I NETWORK  [conn63456637] received client metadata from 185.220.101.6:21642 conn63456637: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - starting
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping 1 collections
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping collection: config.transactions
2021-06-24T01:34:08.020+0000 I STORAGE  [conn63456637] dropCollection: config.transactions (no UUID) - renaming to drop-pending collection: config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 }
2021-06-24T01:34:08.029+0000 I REPL     [replication-14545] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 1), t: -1 })
2021-06-24T01:34:08.030+0000 I STORAGE  [replication-14545] Finishing collection drop for config.system.drop.1624498448i1t-1.transactions (no UUID).
2021-06-24T01:34:08.030+0000 I COMMAND  [conn63456637] dropDatabase config - successfully dropped 1 collections (most recent drop optime: { ts: Timestamp(1624498448, 1), t: -1 }) after 7ms. dropping database
2021-06-24T01:34:08.032+0000 I REPL     [replication-14546] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 5), t: -1 })
2021-06-24T01:34:08.041+0000 I COMMAND  [conn63456637] dropDatabase config - finished
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - starting
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - dropping 37 collections

<< SNIP: It goes on for a while... >>

2021-06-24T01:35:18.840+0000 I COMMAND  [conn63456637] dropDatabase newsblur - finished

The above is a lot, but the important bit of information to take from it is that by using a subtractive filter, capturing everything that doesn't match a known IP, I was able to find the two connections that were made a few seconds apart. Both connections from these unknown IPs occurred only moments before the database-wide deletion. By following the connection ID, it became easy to see the hacker come into the server only to delete it seconds later.

Interestingly, when I visited the IP address of the two connections above, I found a Tor exit router:

This means that it is virtually impossible to track down who is responsible due to the anonymity-preserving quality of Tor exit routers. Tor exit nodes have poor reputations due to the havoc they wreak. Site owners are split on whether to block Tor entirely, but some see the value of allowing anonymous traffic to hit their servers. In NewsBlur’s case, because NewsBlur is a home of free speech, allowing users in countries with censored news outlets to bypass restrictions and get access to the world at large, the continuing risk of supporting anonymous Internet traffic is worth the cost.

3. What will happen to ensure this doesn’t happen again?

Of course, being in support of free speech and providing enhanced ways to access speech comes at a cost. So for NewsBlur to continue serving traffic to all of its worldwide readers, several changes have to be made.

The first change is the one that, ironically, we were already in the process of making. A VPC, a virtual private cloud, keeps critical servers accessible only from other servers in a private network. But in moving to a private network, I need to migrate all of the data off of the publicly accessible machines. And this was the first step in that process.

The second change is to use database user authentication on all of the databases. We had been relying on the firewall to provide protection against threats, but when the firewall silently failed, we were left exposed. Now, who's to say this would have been caught if the firewall had failed but authentication was in place? I suspect the password needs to be long enough not to be brute-forced, because eventually, knowing that an open but password-protected DB is there, it could very possibly end up on a list.

Lastly, a change needs to be made as to which database users have permission to drop the database. Most database users only need read and write privileges. The ideal would be a localhost-only user being allowed to perform potentially destructive actions. If a rogue database user starts deleting stories, it would get noticed a whole lot faster than a database being dropped all at once.
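As a hedged illustration of that least-privilege idea, here's a minimal sketch using PyMongo (the driver visible in the logs above); the user names, passwords, and hosts are placeholders rather than NewsBlur's actual configuration, and it assumes mongod is running with authentication enabled:

# Hypothetical least-privilege setup: an app user that can read and write
# documents but cannot run dropDatabase (that requires the dbAdmin role).
# Note it can still drop individual collections, so this is one layer, not a cure.
from pymongo import MongoClient

admin_client = MongoClient("mongodb://admin:ADMIN_PASSWORD@localhost:27017/admin")
admin_client["newsblur"].command(
    "createUser", "app_user",
    pwd="A_LONG_RANDOM_PASSWORD",        # long enough to resist brute force
    roles=[{"role": "readWrite", "db": "newsblur"}],
)

# The application connects as the limited user:
app = MongoClient("mongodb://app_user:A_LONG_RANDOM_PASSWORD@localhost:27017/newsblur")
app["newsblur"]["feed_icons"].find_one()    # allowed: normal reads/writes work
# app["newsblur"].command("dropDatabase")   # raises OperationFailure: unauthorized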

But each of these is only one piece of a defense strategy. As this well-attended Hacker News thread from the day of the hack made clear, a proper defense strategy can never rely on only one well-setup layer. And for NewsBlur that layer was an allowlist-only firewall that worked perfectly up until it didn't.

As usual the real heroes are backups. Regular, well-tested backups are a necessary component of any web service. And with that, I'll prepare to launch the big NewsBlur redesign later this week.

6 public comments

seriousben · 82 days ago · Canada
Great root cause analysis of a security incident.

chrisrosa · 83 days ago · San Francisco, CA
Great write up Samuel. And kudos for your swift and effective response.

jshoq · 83 days ago · Seattle, WA
This is a great account of how to recover a service from a major outage. In this case, NewsBlur was attacked by a scripter using a well-known hole, and a well-planned and validated backup setup helped NewsBlur get its service back online quickly. This is a great read of a blameless post mortem executed well.
JS

jqlive · 84 days ago · CN/MX
Thanks for the write up, it was interesting to read and very transparent of you. It would be an interesting read to know how you'll be applying ML models to NewsBlur.

samuel · 84 days ago · Cambridge, Massachusetts
What a week. In other news, new blog design launched!

deezil · 83 days ago
Thanks for being above-board with all this! The HackerNews comment section was a little brutal towards you about some things, but I like that you've been transparent about everything.

samuel · 83 days ago
HN only knows how to be brutal, which I always appreciate.

acdha · 82 days ago
Thanks for writing this up. That foot-gun really needs fixing.

BLueSS · 84 days ago
Thanks, Samuel, for your hard work and efforts keeping NewsBlur alive!

Fixing the News Media and Digital Platform Bargaining Code


The News Media and Digital Platforms Mandatory Bargaining Code (NMDPMBC) of Australia made headlines earlier this year. The Australian government’s new legislation hopes to “[address] bargaining power imbalances between digital platforms and [news businesses].” It does this by requiring all digital platforms to pay news organizations for any use of their news products, including but not limited to merely linking to news.

Read more …




After the Pandemic

I'm looking forward to having to worry a lot less about covid, but wouldn't mind if we worried a little more about giving each other colds. Colds are bad!