Temperature monitoring

Xiaomi Mijia Temperature Sensor

I've been having some temperature problems in my house, so I wanted to set up some thermometers which I could read from a computer, and look at trends.

I bought a pack of three cheap Xiaomi IoT thermometers. There's some official Xiaomi tooling to access them from smartphones and suchlike, but I wanted something more open. The thermometers have some rudimentary security on them to try and ensure you use the official tooling. This is pretty weak, and the open-source Home Assistant (HA) has support for querying them. I wasn't already running HA, though, and it looked like it would do more than I needed right now.

gathering

A friend told me that it was trivial to flash custom firmware onto the devices. It's so easy you can do it from a web-based flasher: in fact, anyone in range can. There's a family of custom firmwares out there, and most move the sensor readings into the BTLE announcement packets, meaning you can scrape the temperature by simply reading and decoding those packets: no need to handshake at all, and certainly no need to navigate Xiaomi's weird security. This is the one I used.

I hacked up a Python script to read the values with the help of this convenience library¹.
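For reference, here's a minimal sketch of the kind of thing the script does (not the script itself). It listens for advertisements using the bleak library and decodes them assuming the "ATC1441" advertising layout: service data on the Environmental Sensing UUID, with a big-endian temperature in tenths of a degree. Both the library and the exact layout are assumptions on my part; other firmware variants use a slightly different format.

    # Minimal sketch, not the actual script. Assumptions: the "bleak" library
    # for scanning, and the ATC1441 advertising layout (service data on the
    # Environmental Sensing UUID 0x181A: 6-byte MAC, int16 big-endian
    # temperature in 0.1 degC, uint8 humidity %, uint8 battery %).
    import asyncio
    import struct

    from bleak import BleakScanner

    ENV_SENSING_UUID = "0000181a-0000-1000-8000-00805f9b34fb"

    def on_advertisement(device, adv_data):
        payload = adv_data.service_data.get(ENV_SENSING_UUID)
        if payload is None or len(payload) < 10:
            return
        # Bytes 0-5 are the sensor MAC; temperature, humidity and battery follow.
        temp_raw, humidity, battery = struct.unpack_from(">hBB", payload, 6)
        print(f"{device.address}  {temp_raw / 10:.1f}C  {humidity}% RH  {battery}% battery")

    async def main():
        # Listen passively for a minute; the sensors broadcast every few seconds.
        async with BleakScanner(on_advertisement):
            await asyncio.sleep(60)

    if __name__ == "__main__":
        asyncio.run(main())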

Next, I needed to set up somewhere to write the values.

reporting

The study is thankfully cooler today

It's been long enough since I last looked at something like this that the best-in-class software was things like the Multi Router Traffic Grapher (MRTG) and rrdtool, or things that build on top of them like Munin. The world seems to have moved on (rightly or wrongly) with a cornucopia of options like Prometheus, Grafana, Graphite/Carbon, InfluxDB, statsd, etc.

I ruled most of these out as being too complex for what I want to do, and got something working with Graphite (front-end) and Carbon (back-end). I was surprised that this wasn't packaged in Debian, and opted to try the Docker container. This works, although even that is more complex than I need: it's got Graphite and Carbon, but also nginx and statsd too; I'm submitting directly to Carbon, side-stepping statsd entirely. So as I refine what I'm doing I might strip that back.
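Submitting directly to Carbon is pleasantly simple: it speaks a plaintext line protocol on TCP port 2003, one "metric value timestamp" line per sample. A minimal sketch (the host and metric path are placeholders for my setup):

    # Push one reading to Carbon's plaintext protocol (TCP port 2003).
    # Host, port and metric path are placeholders.
    import socket
    import time

    CARBON_HOST = "localhost"
    CARBON_PORT = 2003  # Carbon's plaintext listener

    def send_to_carbon(metric, value, timestamp=None):
        line = f"{metric} {value} {timestamp or int(time.time())}\n"
        with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
            sock.sendall(line.encode("ascii"))

    send_to_carbon("house.study.temperature", 21.4)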

next steps

I might add more sensors in my house! My scripts also need a lot of tidying up. But I think it would be useful to add some external temperature data, such as readings from a weather service. I am also considering pulling in some of the sensor data from the Newcastle University Urban Observatory, which is something I looked at a while ago for my PhD but didn't ultimately end up using. There are several temperature sensors nearby, but they seem to operate relatively sporadically.

There's a load of other interesting sensors in my vicinity, such as air quality monitors.

I'm currently ignoring the humidity data from the sensors but I should collect that too.

It would be useful to mark relevant "events", too: does switching on or off my desktop PC, or printer, etc. correlate to a jump in temperature?


  1. I might get rid of that in the future as I refine my approach

Old Posts, Still Relevant

This is going to be a bit of an unusual post, but I’ve effectively run out of finished posts for a couple more weeks. The reasons are to be found in my previously-announced COVID experience, which as I said sucked – the last post I wrote in the middle of it turned out to be a much worse rambling mess than usual – and the fact that work had me on a tight deadline for most of the summer.

So instead of writing something new, I’m gathering some of the topical commentary I left in other venues that links to a number of old (sometimes very old) posts of mine. It’s going to be a very link-heavy post, rather than the usual “essay”, but hopefully it will also bring out some previously buried posts of mine.

GitLab, Self Hosting, and FLOSS Cooperatives

GitLab has been a darling in many FLOSS spaces because they are not affiliated with Microsoft, but in the past few weeks they went through a huge storm when The Register reported on their plans to delete inactive repositories.

As usually happens when a hosting provider realises they can’t afford to stay around forever (this happened before, and will keep happening), there’s a vocal minority of FLOSS people who will try to convince authors and maintainers that the only option to survive is to run their own infrastructure.

Unfortunately, despite the cries of “the cloud is just someone else’s computer” (“The bakery is just someone else’s oven”), there’s a lot of things that are also someone else’s problem when you use a solution provided by a third party. Maintaining a solid infrastructure, particularly for more complex projects, is very time consuming, particularly when you want to not depend on ready-made solutions.

The last time this topic came out, I wrote that in my opinion what we need is FLOSS Cooperatives, but just as back then I don’t think it’s going to be a feasible option: the moment when money is involved, there are commitments to expect and respect, and given that the comparison would be with staffed and funded solutions such as GitHub, it would take quite a bit of money and userbase to maintain a 24/7 SLO — to the point of competing with paid solutions from companies such as GitLab as well.

To plug more of my previous writing, this is also the kind of thing I would like to see more of from non-profits (or maybe B Corporations?), rather than focusing nearly exclusively on privacy, as the FSFE appeared to do.

Hyperboles, Personality, and Books

I have a strong dislike for cults of personality in all forms, and have over time applied the maxim «Follow principles, not people», which, funnily enough, I heard from a person I wouldn’t follow to the bathroom. That appears to make me a renegade in Tech, where everyone appears to accept the words of their heroes with little questioning.

A couple of weeks ago, Mikko Hypponen released a book, titled If It’s Smart, It’s Vulnerable — catchy name, catchy premise, and someone who appears to be widely accepted as being smart. Maybe the premise applies to people as well. I honestly felt annoyed by the amount of uncritical noise in social media over the book, although I admit I am not going to read the book, because I do not believe that Hypponen should be lent the credibility for it.

I’m not trying to argue that he doesn’t have the experience, or the insight, to know what he’s talking about. I’m arguing that just at the end of last year he amplified a silly take about smart thermostats, because it fit his narrative. The same narrative that this book appears to be making front and centre.

This is where credibility falls for me: there are significant problems with the way current “throwaway” smart devices are deployed and sold; we don’t need to create fake takes around them. Scaring users won’t help if we are actually trying to help the public.

The whole situation reminded me of how I similarly stay away from Doctorow: much as his early tech coverage has been instrumental in pointing out privacy problems that many had up to then ignored, either out of self-interest or simple ignorance, his later takes have been hyperbolic, in my opinion just feeding the caricature of privacy advocates as tinfoil-hat-wearing weirdos. Case in point? The figurative “literally” when misrepresenting Abbott’s takedown.

Midwife To the Container Revolution

I stumbled across an awkwardly phrased, 13-year-old post of mine, which I found quite fascinating to look at: it was written at a time when I was still finding it interesting to play around with PAM, and with complicating my single-user system to build an understanding of how to secure multi-user systems as well.

That post predates the systemd announcement by a number of months, but it talks about concepts that systemd made popular and effectively omnipresent even on non-systemd Linux installations nowadays, such as /run and its user-specific directories. I do not know if I happened to make the same discovery as Lennart or if he was vaguely inspired by my experiments – we used to chat a lot for a long while, since I was packaging PulseAudio among others – but at the very least I can see that I wasn’t too far off the mark on those concepts.

It wasn’t the only time. The year prior I noted the memory wasted by parsing pci.ids files at runtime. Eventually, the hardware IDs database became a binary format that could be directly mapped from the filesystem. And user services, which again systemd implements nicely nowadays, were basically drafted in February 2009. Again, I don’t expect to have been the direct source for the ideas, but at least I can say that I was sensing a need of some kind.

As I was reflecting on these posts, I joked that I sometimes refer to myself as the midwife of the container revolution. Nowadays everything appears to be using Docker (that was first released in 2013) or a variation thereof, but the Gentoo Tinderbox I ran moved to containers (based on LXC) all the way back in 2009!

You can indeed see that I had a lot of content early on about containers, and I was active in lxc-devel when the project was still managed by IBM. Gentoo Linux was an early, easy target to support as a container guest, among other reasons because I needed it to be for the tinderbox to run successfully. I can’t take the credit for having made containers a mainstream technology, but I have had my hands dirty in the process.

Similarly, while Roy deserves all the credit for OpenRC, I feel like I had a bit of a part to play in that success as well: what became OpenRC started as part of baselayout2, and it was separated explicitly to make it easier to use in Gentoo/FreeBSD, which was the first project I worked on in Gentoo. And indeed, while Roy is now possibly better known for being a NetBSD developer, he was the original member of Gentoo/FreeBSD/SPARC64, and got hooked on NetBSD while trying to make Gentoo/NetBSD a thing. Roy is awesome, if you didn’t know that!

Closing Thoughts

Have you read something you like on the blog? Please, share it with others! In this day and age it seems like the only way to be heard is to have spicy hot takes and stir up controversy, but personally I don’t have the energy for that.


Upscaling and an Important Note About Photo “AI”

One of these is a photo. One is a digital illustration.
John Scalzi

Because I’m a digital photography nerd, I have a lot of programs and Photoshop plugins designed to tweak photos and make them better, or, maybe more accurately, less obviously bad. One of the hot new sectors of digital photography programs is the one where “Artificial Intelligence” is employed to do all manner of things, including colorizing, editing and upscaling. Some of this is baked into Photoshop directly — Adobe has a “Neural Filters” section for this — while other companies are supplying standalone programs and plugins.

Truth be told, all of these companies have been touting “AI” for a while now. But in the last couple of iterations of these tools and programs, there’s been a real leap in… well, something, anyway. The quality of the output of these tools has become strikingly better.

As an example, I present to you the before and after picture above. The original picture on the left was a 200-pixel-wide photo of Athena as a toddler. There had been a larger version of it way back when, but I had cropped it way down for an era when monitors were 640 x 480, and then tossed or misplaced the original photo. So the blocky, blotchy low-resolution picture of my kid is the only one I have now. The picture on the right is a 4x upscaling using a program called Topaz GigaPixel AI, which takes the information from the original picture, and using “AI,” makes guesses at what the picture should look like at a higher resolution, then applies those guesses. In this case, it guessed pretty darn well.

Which is remarkable to me, because even just a couple iterations of the GigaPixel program back, it wasn’t doing that great of a job to my eye — it could smooth out jagged edges on photos just fine, but it was questionable on patterns and tended to make a hash of faces. Its primary utility was that it could do “just okay” attempts at upscaling much faster than I could do that “just okay” work on my own. This iteration of the program, however, does better than “just okay,” more frequently than not, and now does things well beyond my own skill level.

It’s still not perfect; some other pictures of Athena from this era that I upsampled didn’t quite guess her face correctly, so she didn’t look as much like she actually did at the time, and more like a generic toddler. But that generic toddler looked perfectly reasonable, and not like a machine-generated mess. That counts as an improvement.

Now, it’s important to acknowledge a thing about these new “AI”-assisted pictures, which is that they are no longer photographs. They’re something different, closer to a digital illustration than anything else. The upscaled picture of Athena here is the automated equivalent of an artist making an airbrushed painting of my kid based on a tiny old photo. It’s good, and it’s pretty accurate, and I’m glad I have a larger version of that tiny image. But it’s not a photograph anymore. It’s an illustrated guess at what a more detailed version of the photograph would have been.

Is this a problem? Outside of a courtroom, probably not. But it’s still worth remembering that the already-extremely-permeable line between photograph and illustration is now even more so. Also, if you weren’t doing so already, you should treat any “photo” you see as an illustration until and unless you can see the provenance, or it’s from a trusted source. This is why, incidentally, AP and most other news organizations have strict limits on how photos can be altered. I’d guess that a 4x “AI”-assisted enhancement would fall well outside the organization’s definition of acceptable alteration. So, you know, build that into your world view. In a world of social media filters turning people into cats or switching their gender presentation, this internalization may not be as much of a sticking point as it once was.

With that said, it’s still a pretty nifty thing, and I will play with it a lot now, especially for older, smaller digital pictures I have, and to (intentionally) make illustrations that are based on those upscaled originals. I’m glad to have the capability. And that capability is only going to get more advanced from here.

— JS


Plans to open a disused railway bridge to pedestrians

A section of the Thames with few bridges could become a lot easier for pedestrians and cyclists to cross if plans to convert a disused railway bridge for pedestrian use go ahead.

(c) Moxon Architects

The disused bridge crosses the Thames at Barnes, which may confuse some people as the bridge there is in daily use by trains. That’s because, little noticed by most people, there are actually two bridges here. The railway bridge in use today was built in the 1890s as a replacement for an earlier cast-iron bridge built in 1849. That older, disused bridge sits right next to the railway bridge, even though few people realise they are two separate structures.

There is an existing walkway on the live railway bridge side, but it’s narrow and lacks any step-free access options. It’s also, in theory, not open to cyclists.

A plan, supported by the councils on either side of the river, is to open up the disused bridge as a wider pedestrian and cycle route, with gentle gradient slopes on either side to provide an accessible and pleasant way to cross the river. Another benefit is that, on the southern side, the slope up to the footbridge will also offer step-free access to the outward-bound platform at Barnes Bridge station, which is next to the railway bridge.

As the bridge will be open to cyclists, to discourage speeding while maintaining at least 2 metres of width along the route, the footpath meanders around planters and integrated seating.

(c) Moxon Architects

The south side in Barnes is largely residential, while the north side in Hounslow is mostly fields and sports facilities. The river walk on the north side is also being upgraded at the moment with a new pedestrian path under the railway bridge to make that route easier to use. The architects who developed that new pedestrian link are the same as the ones working on this new project to open up the disused railway bridge, Moxon Architects, so they’re familiar with the area.

They also have support from Network Rail to carry out the plans.

The organiser’s official website says that the new bridge will also offer views of the annual Boat Race, as the existing narrow footbridge is closed on Boat Race day to prevent overcrowding.

The current estimate is that the project will cost around £3 million to complete. The bulk of the costs are for the step-free access at either end of the railway bridge, and then there’s landscaping work, moving some power cables from the live railway and restoring a Victorian turnstile at the Hounslow end. Studies have already been carried out on the structure, so they don’t anticipate any huge surprises there.

Subject to securing the funding, they expect to open the disused railway bridge to the public in 2026.

This article was published on ianVisits



London Underground’s mobile phone coverage to expand by end of this year

Mobile phone coverage on the London Underground is expanding, with five stations confirmed to go live within the next six months.

Leaky feeder cable for phone signals being installed (c) TfL

At the moment, coverage for all mobile networks is available in the Jubilee line tunnels between Westminster and Canning Town, and TfL has confirmed that Bank, Oxford Circus, Tottenham Court Road, Euston, and Camden Town will be the first stations to gain coverage as part of the network expansion.

Mobile phone coverage on the London Underground is being delivered under a concession agreement, so the cost of installing it is funded by BAI Communications at no cost to TfL, while TfL will also earn revenue from the contract over its 20-year lifespan. Since the contract was signed, BAI has been upgrading the Jubilee line’s trial network to a permanent one before expanding phone coverage across the rest of the London Underground.

Once those first five stations are live, TfL says that further sections of the tube network will go live by summer 2023 – including stations across the City and West End on the Central line. TfL and BAI are also continuing to progress with delivering mobile coverage across the recently opened central section of the Elizabeth line between Paddington and Abbey Wood.

At the moment, although all mobile networks support coverage in the Jubilee line, that’s a legacy of the trial agreement, and only Three and EE had previously signed agreements to expand their coverage across the rest of the London Underground. TfL has now confirmed that both Vodafone and O2 have also signed up to support expanded coverage on the rest of the tube.

In addition, it’s been announced that the Wi-Fi network in the stations, which was originally installed by Virgin Media, will be transferred to BAI to operate on behalf of TfL from next April.

Shashi Verma, Chief Technology Officer at TfL, said: “I’m delighted that all four major mobile operators are set to provide high-speed, uninterrupted 4G coverage on the Tube. We are working hard with BAI Communications to get the next stations completed by the end of the year so our customers can benefit as soon as possible.”

All stations and tunnels across the Tube network are expected to have mobile coverage by the end of 2024.

BAI’s neutral host mobile network will also host the new Emergency Services Network (ESN), which will give first responders immediate access to life-saving data, images and information in live situations and emergencies on the frontline.

Across the wider Connected London programme, BAI anticipates investing more than £1 billion on establishing a backbone of mobile and digital connectivity for London. A full-fibre network will also be delivered that will connect to buildings and street assets, like traffic lights and lampposts that house small mobile transmitter cells to improve 4G and 5G phone coverage.

This article was published on ianVisits



7 simple bot detection methods that won’t inconvenience users

Millions of (poorly coded) bots relentlessly crawl the web to detect and spew junk content into any form they find. The go-to countermeasure is to force everyone to complete a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). CAPTCHAs are those annoying user-hostile tests where you type in skewed letters or identify objects in photos. They require cultural familiarity, introduce accessibility barriers, and waste everyone’s time. Instead of using a CAPTCHA, you can detect and block many bot submissions using completely unobtrusive form validation methods.

Read more …
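The article’s own seven methods are behind the link; purely as an illustration of the general idea (my example, not necessarily one of the seven), a hidden “honeypot” field combined with a minimum time-to-submit check catches a surprising number of naive bots without the user ever noticing:

    # Illustration only: two unobtrusive server-side checks. "website" is a
    # honeypot field hidden from humans with CSS; rendered_at is when the form
    # was generated. The field name and threshold are made up for the example.
    import time

    MIN_SECONDS_TO_SUBMIT = 3  # a human rarely completes a form this quickly

    def looks_like_bot(form: dict, rendered_at: float) -> bool:
        if form.get("website"):  # honeypot field was filled in -> bot
            return True
        if time.time() - rendered_at < MIN_SECONDS_TO_SUBMIT:
            return True          # submitted implausibly fast
        return False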


