
Plans to open a disused railway as the “Camden Highline” are approved


Plans to convert a disused railway in North London into an elevated walkway for pedestrians have been given the go-ahead after Camden Council granted planning permission for the first section of the highline walkway.

Current disused railway (c) Camden Highline

The project will see the transformation of a section of the disused railway into a new elevated urban park that’ll be open to the public to stroll along, akin to New York’s High Line.

Design concept (c) JCFO

Section one, from Camden Gardens, will be reached via a lift or stairs that take you through a tree canopy onto a gantry, offering unique views over the park and of the Victorian railway viaduct. The second phase, to come later, will link the Highline to the eastern edge of Agar Estates, and the third phase will take it to Maiden Lane Estates.

The access points to get up to the park, at Camden Gardens, Royal College Street, Camley Street, and York Way, will be fully accessible, with a potential additional fifth staircase at St Pancras Way.

The project, which runs alongside the London Overground railway, is set to be built in three sections, beginning at Camden Gardens to Royal College Street, then east to Camley Street, and finally to York Way.

Now that planning permission has been granted for the first stage, the Camden Highline charity is keeping up its fundraising momentum and looking for major donors to come on board to support the £14m cost of the first section of the project and get construction work underway.

Simon Pitkeathley, CEO of Camden Highline, comments: “To go from a Google Earth printout, sellotaped together on our table, to now a real designed thing with planning permission is amazing. I want to say a huge thanks to everyone who has come with us on this exciting journey, particularly the design team, who have done an incredible job, and all the donors who backed us through the riskiest stages of the project. We’re now shovel ready, but need your help to continue the momentum and raise the money to deliver this amazing park in the sky.”

It has previously been estimated that the Camden Highline could open from 2025.

Route map (c) Camden Highline

This article was published on ianVisits

SUPPORT THIS WEBSITE

This website has been running for just over a decade now, and while advertising revenue contributes to funding the website, it doesn't cover the costs. That is why I have set up a facility with DonorBox where you can contribute to the costs of the website and the time invested in writing and researching the news articles.

It's very similar to the way The Guardian and many smaller websites are now seeking to generate an income in the face of rising costs and declining advertising.

Whether it's a one-off donation or regular giving, every contribution goes a long way towards covering the running costs of this website and keeping you topped up with regular doses of Londony news and facts.

If you like what you read on here, then please support the website here.

Thank you


Read the whole story
Flameeyes
10 days ago
reply
London, Europe
Share this story
Delete

Why don’t people just…?


Bit of a rant here, so be warned… Caught two threads today with the general gist of “why don’t people just…” — specifically, why haven’t people...

The post Why don’t people just…? appeared first on Dissociated Press.


An Update On My Thoughts on AI-Generated Art

John Scalzi

Tor, which is the publisher of my novels, is being called out for using AI-generated art on a book cover; it appears that they got it from a stock art house. Getting graphic elements from stock art to modify on covers is a common enough practice — including on my own most recent novel cover — but the fact stock art houses are now stocking up on AI-generated art (which they then sell, undercutting creators) is, to put it mildly, not great. It’s possible Tor didn’t know (or didn’t pay attention to) the fact the stock art was AI-generated, but that doesn’t make it better, it kind of makes it worse.

So, two things here:

1. I’ll be emphasizing to Tor (and other publishers) that I expect my covers to have art that is 100% human-derived, even if stock art elements are used;

2. For now I’m done with AI art in public settings. As much fun as it has been to play with, the fact it’s already migrating onto “Big Five” covers is troubling, and I think it’s more important to stand with and support visual artists than it is to show off things I’ve generated through prompts on social media.

I think there is probably a way to responsibly use and generate art with AI, which probably includes ways to make sure “training” is opt-in and compensated for, but we’re not there yet, and I’m okay waiting for some additional clarity before I start playing with it again in public.

— JS


A FreeStyle Libre Retrospective, Nearly Seven Years Later


I have diabetes, not that it is a secret. Like many other people with diabetes, I was trained to prick my finger multiple times a day to test my blood sugar (glycemia) from the moment I left the hospital. The finger-pricking routine involves drawing just enough capillary blood to load into a test strip (or cartridge), which gives you an instantaneous reading. It's a time-tested method, but also one with a lot more drawbacks than you may think.

First there is the obvious problem of pain: pricking the tip of your finger is painful — even more so if you work with your hands (including typing) and use your fingertips all the time as well. There's also the risk of permanent nerve damage, which is why long-term diabetics tend not to feel that particular pain anymore. To add insult to (literal) injury, diabetes affects platelets, making fingers bleed longer if you prick them too deep. Glucometers therefore tend to distinguish themselves by the amount of blood their test strips need (less blood means you don't need to prick as deeply), as well as by the spring-loaded pricking device they come with (the hospital where I was an in-patient used painful, deep, single-use spring-loaded devices that left my fingers literally livid after just a couple of days.)

If you have never tested your blood sugar with this method, you may wonder why getting the right amount of blood is so critical. The reason is that the test strips and cartridges usually only allow you a single attempt: if you fail to provide enough blood, an error is given and the strip is consumed. The strips themselves are far more expensive than the glucometer (a razor-and-blades business model), and even in countries where a national healthcare service exists (I have been in the care of the SSN, HSE, and NHS in Italy, Ireland, and the UK respectively) the number of test strips you're given is not unlimited — as an insulin-treated diabetic I think my allowance is around three or four strips a day. So another differentiation between glucometers becomes how fast you need to put the blood onto the strip, and whether you're allowed "a second go."

If you wonder why the expectation is to take just a few datapoints during the day, one of the many reasons is that you're looking at ballparks, not precise details. This may sound counterintuitive, but having bought and reverse engineered glucometers as a hobby for the past few years, I can tell you that taking measurements even from the same finger across multiple meters (even with the same strips, even with the same model of meter) can easily give you results that vary by ±10%. And since you're not meant to repeatedly prick the same finger… the numbers are unlikely to be directly comparable between readings anyway.

I detailed all of this to explain why CGMs (Continuous Glucose Monitors) and the similar (but not quite the same) flash monitors such as the FreeStyle Libre are such a change of pace from what existed before. And yes, I mixed these two concepts up when I originally got the FreeStyle Libre — there's a very fundamental distinction. Both types of device are generally composed of a sensor (which replaces the whole strip-and-blood dance) and a "recorder" (the meter itself), with many different options for how the two communicate. For both types, the recorder may be a smartphone, talking either Bluetooth or NFC. But while CGMs "push" their readings to the recorder continuously, flash monitors require you to scan the sensor to download some amount of data. In the particular case of the FreeStyle Libre, the sensor reads the blood sugar (equivalent) every 15 minutes, and the reader or smartphone will download up to eight hours' worth of samples.

The first generation FreeStyle Libre – which I bought in Ireland, "smuggling" (not really) it across the border from the UK – was a simple flash monitor — it did absolutely nothing if you didn't scan the sensor. The Libre 2 I'm using right now is still a flash monitor, but it includes a Bluetooth LE alert "return path" (though only for one smartphone and optionally one reader device) to alert you if your blood sugar falls below or climbs above a set threshold. An even newer version called the Libre 3 exists, and Daniel wrote about it, but it is not available in the UK at the time of writing — it is an actual CGM system, and I'm looking forward to trying it.

Before digging into my personal experience, I also want to explain the "equivalent" I put there. Most glucometers are not actually taking a reading of your blood sugar. That is because what interests doctors is usually the equivalent of what a drawn blood test would report as your blood sugar. Which means the meters apply some type of scaling factor to give you a value that can be compared with your in-patient blood tests. Since most meters still deal with blood, this tends to be glossed over — but obviously none of the CGMs I know of, nor the FreeStyle Libre, actually draw blood, and they are not installed within veins or anything. Instead, they apply calibration curves to the raw readings of their sensors to calculate a matching value.

This "virtual value" has interesting consequences. Among the people I have discussed this topic with, I have noticed two camps: those who swear by the Libre's accuracy and those who swear by Dexcom's. I have yet to find anyone who likes both. In my particular case, the Libre appears to match my experience a lot better than the Dexcom, so I'm very happy with it, but I have also learnt not to judge if others end up with a different experience than mine.

So now that the very long preamble of how all of this works is out of the way, what’s my experience so far with the FreeStyle Libre (and the Libre 2)? Absolutely fantastic. Not just for the accuracy, which as I said matches my experience with normal blood-based glucometers, but in particular for the observability (I mean, it is the current buzzword) of the blood sugar trends.

The first takeaway for me was understanding just how different metabolism can be between different people. With the exception of recipes that remove all and every source of carbohydrate, I always found myself struggling when trying the most common solutions for diabetic-friendly meals and "low glucose" recipes. I generally had better luck with "low glycaemic index" recipes, but even that was hit and miss: any fruit-based recipe tended to make me spike a lot more than what should have been a higher amount of carbohydrates from pasta. Well, after I started monitoring more than just a couple of readings a day, I could confirm that indeed, I don't take well to most of those. And I decided to stop worrying about it — I can have my pasta just fine, in moderation, and if I avoid nut-based and most rice-based recipes, I don't spike my blood sugar at all.

The clearest example was in the Google Dublin café. At some point in 2015 they were serving two types of mini bread rolls: a wholemeal one that was supposed to be good for you (according to the signs and marking), and some white ones with either sesame or poppy seeds. A single wholemeal roll (weighing less than 50g by my estimate) would see my sugar spike 5→11 within half an hour; I could have four of the white rolls with cheese (about the same total weight) and it would maybe take me 5→10 over a couple of hours. Could I have had a salad instead? Probably, but I wouldn't have enjoyed it, and that would likely have gotten me to snack more for the rest of the day, so I was pretty okay with the bread-and-cheese with occasional meat — when they wouldn't decide to make the most boozy sauce they could think of.

Being able to fine-tune my choice of food is to this day a great thing: I choose to eat things that are enjoyable and known to make my blood sugar behave, rather than feeling like I'm sacrificing myself with food I don't enjoy because it's said to be good for me, and then still suffering for that choice later because it didn't work well for me. As an aside, I think I remember seeing on 23andMe a link to a study suggesting I would be more likely to tolerate a Mediterranean diet — I can't find it anymore. But as it turns out, that appears to be the case: having my carbs come from pasta rather than potatoes or rice appears to help a lot with my blood sugar control!

A second takeaway is to be found in math. Because with the Libre I'm no longer looking at a single point in time, but rather at a curve, I can judge its derivatives — that is, whether it is going up or down, and whether it is moving fast or slow. When deciding when and what to eat, this prepares me a lot better than just knowing that my blood sugar is, for instance, 5.8 mmol/L — is that 5.8 going down from lunch, likely to bring me close to low blood sugar before we reach the restaurant, or is it 5.8 going up from the mid-afternoon snack I had because I'd already hit low blood sugar, and likely to hit 8 by the time we order dinner?

It is true that if you learn to listen to your body you can guess whether the sugar is going up or down and whether it is close to hypo or hyper thresholds. And it is recommended for you to keep listening to your body that way so that you don’t end up waking up in the middle of the night with a hypoglycaemic event and no sugars at home (it happened to me – thankfully only once – when I was about to move out of Dublin.) But being able to learn those feelings by comparing the sensations with actual readings is quite useful in my opinion.

On a similar note, with COVID among us, having a way to check my blood sugar turned out to be very useful for quickly telling that something was off with me. Indeed, even with the most recent "simple" flu I experienced, I could tell I was about to have a bad day of fever by noting that my blood sugar remained in the ~12 range for the whole day, no matter whether I ate or not. I have experienced the opposite as well, when I ate something my stomach didn't enjoy and my sugar hovered around ~4.5 for hours, even after having sweets and sugary drinks.

Now, this does not mean it is an absolute slam-dunk. I already noted that it does not work for everybody, and I know a number of people who would much rather use the Dexcom — I hear y'all, you don't have to tell me. The only thing I'm mildly surprised about is that there doesn't seem to be much research looking into why opinions are so polarized.

In addition to this, there's an optimization risk. I know of people who got so fixated on having an optimal number that they lost track of what the number is meant to represent. As I said, the main way I use the Libre data is to know what I can eat that makes me the happiest while staying healthy. That does not mean death to all carbs, or optimizing to always be "in the zone." But it does mean there's a psychological dimension to using solutions like these, which I'm ill-placed to discuss.

Finally, there's the biggest drawback of these solutions: they are expensive, at least by European expectations (remember that most people here don't pay for insulin.) Since I'm diabetic I don't pay VAT on the sensors, but even then they set me back more than £100 a month — each sensor lasts two weeks and they come to around £50 each. In a couple of cases I even "threw away" a day or two of a sensor's lifetime by changing it early before a trip, to avoid having to take two "new in box" sensors with me. Why two? Because at least a few times this year the sensor applicator (effectively a spring-loaded needle that pierces your skin and lands the sensing strip under it) failed to pierce my skin, leading to a wasted sensor.

Thankfully, Abbott replaces those failed sensors free of charge, and the same goes for the couple of sensors that failed in the middle of their lifespan — although that has not happened at all in the past couple of years. Similarly, I have not "lost" a sensor to my own mistakes in a long time: early on I struggled to find a good placement for the sensor, once or twice knocking it off while taking a shower, hitting the side of a door (yes), or during physical activities.

In the UK, the NHS covers the cost of the sensors for only a few people, particularly those who are unable to keep their sugars under control easily, and those who couldn't otherwise afford them. Since I'm in the privileged position of both being able to afford the sensors and having enough control of my blood sugar even without them, I don't qualify — and I think that is okay.

The biggest drawback for me personally? Probably the anxiety of not having the information available for a stretch of time. I'm not at the point where I would get anxious about going out without tracking for a few hours (like in the article Daniel sent me when I started using the Libre), though I do tend to plan my outings around when the sensor needs to be changed, leaving an extra hour's buffer just in case the sensor doesn't initialize correctly.

Most recently, with distractions between work and other things happening around me, I had forgotten to order new sensors in time, and came to change my sensor with a single box to spare. My fear was that it wouldn't initialize and I would find myself having to go back to five fingerpricks a day. It was all in my head anyway: the sensor initialized fine, and having ordered the sensors early on Friday morning, I had them before lunch on Saturday (actually, before getting out of bed, but both my wife and I were feeling under the weather, so we slept in.)

So at the end of the day, I have to say that I’m not going back to fingerpricks any time soon, and I believe that, putting aside the whole accuracy problem, this type of technology will eventually be considered the baseline of care for diabetes — and will literally save lives.

Final disclaimer: this post describes my personal experience and is obviously not medical advice. Consult your doctor before making treatment and medical decisions. I also have a minimal financial interest in Abbott, as I bought some of their stock years ago in awe of their achievement with the Libre.


Technical Decision Making


There’s absolutely no poverty of technical advice to be found these days, be it on social media or on blog posts or at technical conferences or in publications.


via Pocket: https://copyconstruct.medium.com/technical-decision-making-9b2817c18da4

November 25, 2022 at 10:14PM


No way to parse integers in C


There are a few ways to attempt to parse a string into a number in the C standard library. They are ALL broken.

Leaving aside the wide character versions, and staying with long (skipping int, long long, and intmax_t — these variants all have the same problem), there are three ways I can think of:

  1. atol()
  2. strtol() / strtoul()
  3. sscanf()

They are all broken.

What is the correct behavior, anyway?

I’ll start by claiming a common sense “I know it when I see it”. The number that I see in the string with my eyeballs must be the numerical value stored in the appropriate data type. “123” must be turned into the number 123.

Another criterion is that the WHOLE number must be parsed. It is not OK to stop at the first sign of trouble and return whatever might be right. "123timmy" is not a number, and neither is the empty string.

Failing either of the above must be an error. Or at least, as the user of the parser, I must have a way to know it happened.

First up: atol()

Input                                Output
"123timmy"                           123
"99999999999999999999999999999999"   LONG_MAX
"timmy"                              0
"" (empty string)                    0
"   " (just spaces)                  0

No. All wrong. And no way for the caller to know anything happened.

For the LONG_MAX overflow case, the manpage is unclear on whether it's supposed to do that or return as many nines as fit, but empirically on Linux this is what it does.

POSIX says “if the value cannot be represented, the behavior is undefined” (I think they mean unspecified).

Great. How am I supposed to know whether the value can be represented if there is no way to check for errors? If you pass a string to atol(), you're basically getting a random value, with a bias towards being right most of the time.

I can kinda forgive atol(). It’s from a simpler time, a time when gets() seemed like a good idea. gets() famously cannot be used correctly.

Neither can atol().

Next one: strtol()

I'll now contradict the title of this post: strtol() can actually be used correctly. strtoul() cannot, but if you're fine with signed types only, then this will actually work.

But only if you're careful. The manpage has example code; in function form it's:

bool parse_long(const char* in, long* out)
{
  // Detect empty string.
  if (!*in) {
    fprintf(stderr, "empty string\n");
    return false;
  }

  // Parse number.
  char* endp = NULL;  // This will point to end of string.
  errno = 0;          // Pre-set errno to 0.
  *out  = strtol(in, &endp, 0);

  // Range errors are delivered as errno.
  // I.e. on amd64 Linux it needs to be between -2^63 and 2^63-1.
  if (errno) {
    fprintf(stderr, "error parsing: %s\n", strerror(errno));
    return false;
  }

  // Check for garbage at the end of the string.
  if (*endp) {
    fprintf(stderr, "incomplete parsing\n");
    return false;
  }
  return true;
}

It’s a matter of the API here if it’s OK to clobber *out in the error case, but that’s a minor detail.

Yay, signed numbers are parsable!

How about strtoul()?

Unlike its sibling, this function cannot be used correctly.

The strtoul() function returns either the result of the conversion or, if there was a leading minus sign, the negation of the result of the conversion represented as an unsigned value.

Example outputs on amd64 Linux:

Input                  (meaning)     Output                (meaning)
-1                     -1            18446744073709551615  2^64-1
-9223372036854775808   -2^63         9223372036854775808   2^63
-9223372036854775809   -2^63-1       9223372036854775807   2^63-1
"   "                  just spaces   Error: endp not null
-18446744073709551614  -2^64+2       2                     2
-18446744073709551615  -2^64+1       1                     1
-18446744073709551616  -2^64         Error: ERANGE

Phew, finally an error is reported.

This is in no way useful. Or I should say: Maybe there are use cases where this is useful, but it’s absolutely not a function that returns the number I asked for.

The title in the Linux manpage is "convert a string to an unsigned long integer". It does that: technically it converts the string into an unsigned long integer. Not the obviously correct one, but it does indeed return an unsigned long.

It is interesting to note that a non-empty input of just spaces is detectable as an error. It's obviously the right thing to do, but it's not clear that it is intentional.

So check your implementation: If passed an input of all isspace() characters, is this correctly detected as an error?

If not then strtol() is probably broken too.

Maybe sscanf()?

A bit less code needed, which is nice:

bool parse_ulong(const char* in, unsigned long* out)
{
  char ch; // Probe for trailing data.
  int len;
  if (1 != sscanf(in, "%lu%n%c", out, &len, &ch)) {
    fprintf(stderr, "Failed to parse\n");
    return false;
  }

  // This never triggered in testing — trailing data is already
  // caught by the %c probe above — but it stays as a belt-and-braces
  // check, since sscanf() doesn't seem to stop parsing on overflow.
  if (len != (int)strlen(in)) {
    fprintf(stderr, "Did not parse full string\n");
    return false;
  }
  return true;
}

Input                  (meaning)     Output                (meaning)
"   "                  just spaces   Failed to parse
-1                     -1            18446744073709551615  2^64-1
-9223372036854775808   -2^63         9223372036854775808   2^63
-9223372036854775809   -2^63-1       9223372036854775807   2^63-1
-18446744073709551614  -2^64+2       2                     2
-18446744073709551615  -2^64+1       1                     1
-18446744073709551616  -2^64         18446744073709551615  2^64-1

As we can see, this is of course nonsense (except for the first one). The last one is extra fun: from the two before it, you'd expect it to be 0, or at least an even number. But no.

That last number is simply “out of range”, and that’s reported as ULONG_MAX.

But you cannot know this. Getting ULONG_MAX as your value could be any one of:

  1. The input was exactly that value.
  2. The input was -1.
  3. The input is out of range, either greater than ULONG_MAX, or less than negative ULONG_MAX plus one.

There is no way to detect the difference between these.

So sscanf() is out, too.

Why does this matter?

Garbage in, garbage out, right? Why does it matter that someone might give you -18446744073709551615 knowing you’ll parse it as 1?

Maybe it’s a funny little trick, like ping 0.

First of all it matters because it’s wrong. That is not, in fact, the number provided.

Maybe you’re parsing a bunch of data from a file. You really should stop on errors, or at least skip bad data. But incorrect parsing here will make you proceed with processing as if the data is correct.

Maybe some ACL only allows you to provide negative numbers, and you use this trick to make it parse as negative in some contexts (e.g. Python), but positive in others (strtoul()).

I even saw a comment saying “when you have requirements as specific as this”. As specific as “parse the number, correctly”?

It should matter that programs do the right thing for any given input. It should matter that APIs can be used correctly.

Knives should have handles. It’s fine if the knives are sharp, but no knife should be void of safe places to hold it.

It should be possible to check for errors.

Can I work around it?

You cannot even assemble the pieces here into a working parser for unsigned long.

Maybe you think you can filter out the incorrect cases and parse the rest. But no.

You can detect negative numbers with strtol(), range checked and all, and discard all these. But you can’t tell the difference between being off scale low between -2^64…-2^63, and perfectly valid upper half of unsigned long, 2^63-1…2^64-1.

It’s not a solution to go one integer size bigger, either. long is long long is intmax_t on my system.

So what do I do in practice?

Do you need to be able to parse the upper half of unsigned long? If not, then:

  1. use strtol()
  2. Check for less than zero
  3. Cast to unsigned long

If all you need is unsigned int, then maybe on your system sizeof(int)<sizeof(long), and this can work. Just cast to unsigned int in the last step.
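A minimal sketch of those three steps (the name parse_ulong_lower_half is mine; only values in 0..LONG_MAX parse, by design):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

// Lower-half workaround: parse as signed with strtol(), reject
// negatives, then cast. Covers 0..LONG_MAX only.
static bool parse_ulong_lower_half(const char* in, unsigned long* out) {
    if (!*in) return false;            // reject the empty string
    char* endp = NULL;
    errno = 0;
    const long v = strtol(in, &endp, 0);
    if (errno || *endp) return false;  // range error or trailing garbage
    if (v < 0) return false;           // "-1" must not wrap to 2^64-1
    *out = (unsigned long)v;
    return true;
}
```

The negativity check is the whole point: it closes the strtoul() loophole where "-1" silently becomes ULONG_MAX.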

Do you need the upper half? Sorry, you’re screwed. Write your own parser.

These numbers are very high, yes, and maybe you'll be fine without them. But one day you'll be asked to parse a 64-bit flag field, and you won't be able to.

0xff02030405060708 cannot be unambiguously parsed by standard parsers, even though there’s ostensibly a perfectly cromulent strtoul() that handles hex numbers and unsigned longs.
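"Write your own parser" can look like the following sketch, which covers the full unsigned long range including the hex form above. It assumes a 64-bit unsigned long and deliberately skips leading whitespace and sign handling; the name parse_ulong_full is mine:

```c
#include <limits.h>
#include <stdbool.h>

// Hand-rolled parser covering the FULL unsigned long range,
// including hex inputs like "0xff02030405060708".
// Accepts base 10, or base 16 with a "0x"/"0X" prefix.
static bool parse_ulong_full(const char* in, unsigned long* out) {
    unsigned long base = 10;
    if (in[0] == '0' && (in[1] == 'x' || in[1] == 'X')) {
        base = 16;
        in += 2;
    }
    if (!*in) return false;  // empty string, or a bare "0x"
    unsigned long acc = 0;
    for (; *in; in++) {
        unsigned long digit;
        if (*in >= '0' && *in <= '9')
            digit = (unsigned long)(*in - '0');
        else if (base == 16 && *in >= 'a' && *in <= 'f')
            digit = (unsigned long)(*in - 'a' + 10);
        else if (base == 16 && *in >= 'A' && *in <= 'F')
            digit = (unsigned long)(*in - 'A' + 10);
        else
            return false;    // garbage character: whole-string parse fails
        // Reject overflow BEFORE the multiply-add would wrap.
        if (acc > (ULONG_MAX - digit) / base) return false;
        acc = acc * base + digit;
    }
    *out = acc;
    return true;
}
```

Unlike the standard functions, this rejects negative input, rejects trailing garbage, and reports overflow as a plain failure instead of wrapping or clamping.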

Any hope for C++?

Not much, no.

C++ method std::stoul()

bool parse_ulong(const std::string& in, unsigned long* out)
{
  // Note: std::stoul() reports errors by throwing std::invalid_argument
  // or std::out_of_range, not through the return value.
  size_t pos;
  *out = std::stoul(in, &pos);
  if (in.size() != pos) {
    return false;
  }
  return true;
}

Input                  (meaning)     Output                (meaning)
"   "                  just spaces   throws std::invalid_argument
timmy                  text          throws std::invalid_argument
-1                     -1            18446744073709551615  2^64-1
-9223372036854775808   -2^63         9223372036854775808   2^63
-9223372036854775809   -2^63-1       throws std::out_of_range

Code is much shorter, again, which is nice.

And std::istringstream(in) >> *out;?

Same.

In conclusion

Why is everything broken? I don’t think it’s too much to ask to turn a string into a number.

In my day job I deal with complex systems with complex tradeoffs. There’s no tradeoff, and nothing complex, about parsing a number.

In Python it’s just int("123"), and it does the obvious thing. But only signed.

Maybe Google is right in saying you should basically never use unsigned. I knew the reasons listed there, but I was not previously aware that the C and C++ standard library string-to-int parsers were also fundamentally broken for unsigned types.

But even if you follow that advice sometimes you need to parse a bit field in integer form. And you’re screwed.
