Kees Cook: security things in Linux v4.16

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was already implemented in CONFIG_HARDENED_USERCOPY, David Windsor and I finished porting another piece of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region as allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one area of remaining work is the kmalloc regions: they are regularly used for copying things in and out of userspace, but they’re also used for small, simple allocations that aren’t meant to be exposed to userspace. Separating these kmalloc users will need some careful auditing.

Total Slab Memory:      48074720
Usercopyable Memory:     6367532   13.2%
  task_struct              0.2%       4480/1630720
  RAW                      0.3%        300/96000
  RAWv6                    2.1%       1408/64768
  ext4_inode_cache         3.0%     269760/8740224
  dentry                  11.1%     585984/5273856
  mm_struct               29.1%      54912/188448
  kmalloc-8              100.0%      24576/24576
  kmalloc-16             100.0%      28672/28672
  kmalloc-32             100.0%      81920/81920
  kmalloc-192            100.0%      96768/96768
  kmalloc-128            100.0%     143360/143360
  names_cache            100.0%     163840/163840
  kmalloc-64             100.0%     167936/167936
  kmalloc-256            100.0%     339968/339968
  kmalloc-512            100.0%     350720/350720
  kmalloc-96             100.0%     455616/455616
  kmalloc-8192           100.0%     655360/655360
  kmalloc-1024           100.0%     812032/812032
  kmalloc-4096           100.0%     819200/819200
  kmalloc-2048           100.0%    1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, they have never been enabled by default. This was mainly due to the need to evaluate compiler support for the feature, and Kconfig didn’t have a way to check compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is the default and tries to use the stack protector whenever it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the ability to check compiler features itself, which will let the kernel expose the availability of the (now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

WordPress, really?

If you’re reading this blog post, particularly directly on my website, you probably noticed that it’s running on WordPress and that it’s on a new domain, no longer referencing my pride in Europe, after ten years of using it as my domain. Wow, that’s a long time!

I had two reasons for the domain change: the first is that I didn’t want to carry the full chain of redirects of extremely old links onto whichever new blogging platform I selected. The second is that it made it significantly easier to set up a WordPress.com copy of the blog while I tweaked and set it up, rather than messing with the domain all at once. The second one will come with a separate rant very soon, as it’s related to the worrying statement from the European Commission regarding the usage of dot-EU domains in the future. But as I said, that’s a separate rant.

A few people were surprised when I talked on Twitter about the issues I faced during the migration, so I want to give some more context on why I went this way.

As you may remember, last year I complained about Hugo – to the point that a lot of the referrers to this blog still come from the Hacker News thread about that – and I started looking for alternatives. When I looked at WordPress, I found that setting it up properly would take me forever, so I kept my mouth shut and doubled down on Hugo.

Except, because of the way it was set up, it meant not having an easy way to write blog posts, or correct them, from a computer that is not my normal Linux laptop with the SSH token and everything else. That was too much of a pain to keep working with. While Hector and others suggested flows that involved Git-based web editors, it all felt too Rube Goldberg to me… and since moving to London my time is significantly more limited than before, so I could either spend time setting everything up, or work on writing more content, which can hopefully be more useful.

I ended up deciding to pay for the personal tier of WordPress.com services, since I don’t care about monetization of this content, and even the few affiliate links I’ve been using with Amazon are not really that useful at the end of the day, so I gave up on setting up OneLink and the like here. It also turned out that Amazon’s image-and-text links (which use JavaScript and iframes) are not supported by WordPress.com even on the higher tiers, so those were deleted too.

Nobody seems to have published an easy migration guide from Hugo to WordPress; most of my search queries produced results for the other direction. I will spend some time later trying to refine the janky template I used and possibly release it. I also want to release the tool I wrote to “annotate” the generated WXR file with the Disqus archive… oh yes, the new blog has all the comments of the old one, and does not rely on Disqus, as I promised.

On the other hand, a few things did get lost in the transition: while the Jetpack plugin gives you the ability to write posts in Markdown (otherwise I wouldn’t even have considered WordPress), the importer doesn’t seem to know how to import Markdown content at all. So all the old posts have been imported pre-rendered – a shame, but honestly it doesn’t happen very often that I need to go back and edit old posts. Particularly now that I have merged in the content from all my older blogs – first into Hugo, and now into this one massive blog.

Hopefully you can expect more posts from me very soon now, and not just rants (although probably mostly rants).

And as a closing aside, if you’re curious about the picture in the header, I have once again used one of my own. This one was taken at the maat in Lisbon. The white balance on this shot was totally off, but I liked the result. And if you’re visiting Lisbon and you’re an electronics or industrial geek you definitely have to visit the maat!




A critical reflection on #GDPR

Given the activities of Cambridge Analytica, as well as Facebook’s obvious inability to even comprehend what the hell people are pissed off about, there is a reinvigorated push for regulating Facebook.

And it makes sense to look at regulation of supranational entities such as Facebook, Google, Amazon and whoever else is the target of the week. Because – aside from some existing fragments on some national levels – nobody seems to have figured out a way to effectively get a handle on what these tech giants cook up in their labs and release into the wild: tech iterates extremely quickly, making too-focused or too-specific regulation irrelevant or ineffective faster than it can be passed by any authority or government.

In the US I often see a push to adopt #GDPR-style regulation. The GDPR (General Data Protection Regulation) is the newest approach of the European Union to regulate and structure privacy and the processing of personal data. It becomes active May 25th of 2018 and is probably the first and most modern privacy regulation of its scale on this planet. Many privacy activists celebrate(d) the regulation, and some were even very involved in its creation, so we should have something good here, right?

But things are usually less shiny when looking closer. Which I did in 2014, in a small tumbleblog that became a German book on the subject matter. But with it all being written in this niche language I was born into, and with the texts having changed somewhat, I thought it would be useful to summarize some of the biggest issues I have with the GDPR. Just to provide a little bit of added context to all the voices proposing GDPR as the silver bullet for “fixing” the Internet. Also, David and Yasha asked for a short summary, so here it goes.

Why should you listen to this, to me? Good question. I am a computer scientist and have been dealing with regulation of IT aspects/privacy for … damn … about a decade now. I am also a certified data protection officer for the GDPR and serve in that capacity for my employer. I’ve also been invited to the German Ministry of the Interior in the time leading up to the GDPR as an expert providing context about the regulation to the government.

This text will just look at the GDPR as it is, as regulation. I don’t really want to go into whether it would have protected people against an actor such as Cambridge Analytica (because it wouldn’t have). I also don’t really want to get into whether a data protection regulation will solve the “problems with Facebook”, because a) that would require me to get into those, which is a whole different ballgame, and b) (as will become clearer when looking at the regulation) the focus on the 800-pound gorilla in the room hasn’t helped the regulation become better.

With all that being said, let’s look at the GDPR.

The GDPR is a regulation that – as already mentioned above – will go live throughout all of the European Union on May 25th. While there are ways for the EU member states to amend the regulation in specific places, it supersedes all existing privacy/data protection regulation within those member states. Starting May 25th, the whole EU will have a very homogeneous data protection regulation.

This was also one of the driving forces of the GDPR: to make it simpler to transfer personal data between EU member states. That used to be a big problem, with, for example, Germany having very different laws and regulations than, say, France. This is very important and often overlooked: GDPR is not supposed to make data stay where it was gathered. Its job is to guarantee a level of protection/regulation attached to data, no matter where it goes. Even if the data leaves the EU.

If you have never read the regulation (and it is actually quite readable compared to other legal texts) you can find a very convenient directory here. But don’t worry, you don’t need to read all of it for this. I’ll point out the articles I see as key issues later in detail.

The good parts

I want to start out with some of the good parts. Because obviously not everything about this is bad; it does have (very) good aspects as well.

Many of the “Rights of the Data Subject” (as in: the person the data at hand refers to) make sense, and it’s great to have them written down and formalized.

I especially find Article 15 not a bad idea in general. It outlines that a data subject has the right to ask whether someone stores data about them and, if yes, what kind of data and where it comes from. We can argue about specifics a lot, but the general idea is solid.

Especially when it comes to scoring or other data that can have discriminatory effects, Article 16 is important, giving every data subject the right to have data concerning them rectified. Good approach.

The right to data portability as outlined in Article 20 is a good idea in general, but given the legal framework it is embedded in and the way it is phrased, it’s largely useless in its current form. I like the idea and would have loved to see it strengthened, but it stands in conflict with much of the rest of the regulation.

Articles 24 to 40 also include many good ideas about enforcing certain standards for data processors (as in: entities using data). Requiring processors to ensure a certain level of security within the chain of data processing, and clearly specifying the kind of information a data processor has to provide to their data subjects, are very reasonable ideas, most of them phrased in a generally OK way (you can always argue about details with these kinds of things).

General criticism

So while there are good things, I wouldn’t have written this post if everything were fine and dandy. Let’s start looking at some problematic aspects that are hard to pin on one specific article but that keep repeating, or that are underlying assumptions about the structure of the (digital) world that don’t really hold up.

Basic mechanism

While the basic legal mechanism (processing of personal data is illegal unless certain conditions are met) could probably be worked with, it does create sort of an uncomfortable lever for governments to declare information about natural people illegal. Processing of personal data (and that includes storing it in a CMS for publication) is illegal in general. That means that the press, and everybody writing about people, now has to make sure that they have a strong case for publishing said data (see the part about Article 6 a few paragraphs down). If someone powerful wants to get rid of something uncomfortable, say reports about them meeting people they shouldn’t have met carrying suitcases full of we-don’t-know-what, they could argue that they don’t consent to their location being processed and force the article to be removed. Of course the press can fight this and argue public interest or something, but every article will be a potentially expensive fight.

The government exception

Most key articles dealing with restricting the processing of personal data include exceptions for government entities. This means that instead of being a strong protection against the government (you know, the people with the guns and the prisons and the secret services), this regulation mostly targets the private sector. And that is an important area to regulate. But it’s really not enough at all. It’s basically the government pointing over your shoulder, yelling “look, there’s a three-headed monkey behind you!” and doing whatever it wants while you are distracted. That is – especially if you follow German traditions, which constituted data protection specifically as a way for people to defend against government overreach – at least surprising.

The Anti-Faceboogle Law backfiring

The law is very obviously written to target the big tech companies and the way they do business. That is obvious when reading the articles as well as the reasoning for them. It is also how the whole legislation is read outside of the EU (which is why people want to use it to destroy Facebook or something).

So the law creates very strong requirements for being allowed to process data, and if you break the law there’s going to be immense punishment. Eat this, Faceboogle!

The problem is: Facebook, Google and all the other big companies do have the skills, the person-power and the resources to implement the regulation. They have the money to do all the data bookkeeping, pay data protection officers and all that. Especially with consent being the key mechanism within the regulation (we’ll come to that later), the big platforms are in a perfect position to funnel people through a consent acquisition process and get everything they need. How well do you think a new startup of 10 people will do in that regard? Do you think that some open source project running a bunch of servers, so that people can use a free and open social networking thing outside of Facebook, has the skills and resources to comply with the regulation?

The immense effort necessary to comply, and to be safe from the data protection agencies as well as lawyers affiliated with a competitor, is only really manageable for the big players. Smaller startups and especially decentralized free/open source projects will always be out of compliance. The law that was supposed to rein in the US tech giants (to allow European alternatives to flourish) only strengthens the position of those that are already near-monopolies.

What is data?

The GDPR has a very simplistic idea of what personal data is. If it refers to one identified (or identifiable) natural person, it is personal data that that person (the data subject) can control. Great.

So what about the data connecting people? Say I am friends with X on Facebook. Is that information about me? About X? Who’s allowed to control it? Is X allowed to have a post removed (right to be forgotten, we’ll get to that soon) that I wrote containing my opinion on X?

There is a lot of data that is clearly about one person. My bank statements are about me and my financial situation. Or are they? If I had kids they would also say a lot about their potential situation. Even my genetic code does not only say things about me but about my sister, my parents and potential offspring.

The understanding of data that the regulation is built on is so simple, so naive, that for many real-world use cases the model simply doesn’t work.

Specific problematic articles

OK. So after we’ve seen that some general ideas the GDPR is based on might be … not as well thought through as we’d like them to be, let’s look at just a handful of somewhat problematic articles. There are more, but I don’t want to let this thing get too long.

Article 3: Territorial scope

The GDPR is supposed to govern:

  • Every processor (company, project, etc.) within EU borders
  • Every processor, wherever based, that targets users/customers who reside within EU borders

While the first part is somewhat simple to figure out (if your headquarters is in Berlin, renting servers in the US won’t allow you to evade the regulation), the second part is problematic for a bunch of reasons – mostly practicality and, you know, national sovereignty.

What does it mean to target EU users? Is it enough to add a checkbox to the signup form making people state “I am not within the EU”? Say you are a startup in South Africa: you don’t care about the EU, you don’t think about the EU, and suddenly some dude from Europe wants to fine you because a few people in Germany complained. That is a weird construct. Why should the EU parliament be allowed to regulate companies or entities it has no legitimacy over? Why does the EU parliament assume that it is allowed to override every law in the world if it feels like it?

Yes, you could see my reading as overly dramatic (I am a passionate person), but I feel like Europe, with its history of imperialism, should maybe find better ways of dealing with international law than just saying “we know best”. The problem of conflicting legislation colliding on the Internet is neither new nor easily handled. But those questions need to be solved, and not by saying “because we say so”.

Article 6: Lawfulness of processing

Article 6 is one of the absolute key articles of the regulation: it defines the reasons that can make the processing of personal data legal.

Basically: if you can point at one of the reasons, you’re mostly good to go.

Some of them are kinda boring: you can process data if you are legally required to, or to fulfill a contract with the data subject. Also there is saving lives (mostly hard to argue) and “public interest” (also hard to argue, unless your interest is security and you want to profile your population).

The two reasons a) and f) are more interesting though. Let’s start with f) (like fuck yeah!).

f) allows you to process personal data if you have a “legitimate interest” that’s not overridden by other interests. This is the reason most ad agencies and people spamming you will fall back onto at first. Their “legitimate interest” is “informing potential customers”, so they are allowed to crunch data to find these potential customers. Of course we’ll have to see the courts rolling the dice on this one, but it already is quite a big door to push even profiling through.

a) falls back to consent, meaning the data subject has freely consented to the data processing. You know how these things work: you want to do something, a service asks you to sign up or check a box. Boom. You just consented. Consent is specifically what the big platforms will fall back to, and it is easy for them to acquire. (Unless they find a way to make processing user data part of the contract they have with the user; I do have a few ideas of how to pull that off, and I am not even a lawyer.)

The law wants to play hardball, but in the end the ways of allowing things are so vague that it’ll end up mostly just adding a few checkboxes everywhere. I don’t see a huge benefit in this, to be honest. But consent has other issues as well; let’s look at it specifically.

Article 7: Conditions for consent

GDPR specifies a few criteria for legal consent to data processing. So you cannot just add a checkbox that’s maybe hidden and says “I allow everything, Facebook, use meeee!”.

The language is supposed to be clear and specific, and consent to one thing cannot be tied to other things that are independent. That is not a stupid idea as such. But consent requires understanding.

Can I really consent to what, for example, Facebook or Google do with the data about me? Am I really able to fully understand what Facebook really wants to do? What the consequences can and will be?

Say Facebook (I keep using them as an example) asks me to “process my interests in order to adapt the Newsfeed to my personal preferences”. That sounds clear and specific. But is it? Not even Facebook engineers can easily say what their AI will do with my data, what kind of model of me it will create.

More and more data is processed by at least partially opaque systems (such as AI/machine learning systems) even if it’s not about profiling. And who can really understand what is being done with that data? Is consent even meaningful? Or will it stay within abstract phrases such as the one I outlined above? I fear it will.

Consent is a powerful tool in human interactions. But for digital spaces it’s not as meaningful as many people – especially privacy experts – believe it to be, because the level of technical, legal and organizational competence required is really not something we can expect every person to have. Especially with companies maybe wanting users to consent to more than might be in their interest.

Consent as phrased here individualizes the data protection issue. The smart, the educated, the tech people will get their privacy, and the rest won’t. Because they either don’t have the skills necessary to understand what’s going on, or because they might value participation in something (and the access it provides) more. This keeps the current model of privacy as a bourgeois fantasy alive, and I am not a fan, to be honest.

Article 17: Right to erasure (‘right to be forgotten’)

Article 17 gives people the right to have data about them deleted. OK, sounds fair. But there is a problem for anything journalistic and, consequently, for the public itself.

If this article were just used to force a company to really delete data about me after I deleted my account, or to force them to really really delete the naked picture of my junk I accidentally posted, we wouldn’t have a lot to discuss here. But this thing ties into the question “What is data?” that I presented earlier. If I can get anything that talks about me deleted because it’s data about me, how can others express their right to voice an opinion (even about me!)?

As my dear readers know, I am not a Free Speech absolutist like so, so many tech libertarians, but I do see free expression as a fundamental value (within certain bounds). This article can be used to repress anything negative about anyone.

And even if there is a question whether specific information (say “X is a serial scam artist, don’t invest in his schemes”) should stay up for the public good or free speech, given the potentially harsh penalties for non-compliance with the GDPR (up to 4% of total worldwide annual turnover of the preceding financial year, Art. 83, Sec. 5), when in doubt companies will delete. Because the risks are very high and the benefit is marginal.

The article is so broad, so powerful, and fighting it runs so against every incentive, that it is a big danger to the free press – and specifically to those who write but might not have the legal staff of a big newspaper or publisher at hand. You like blogs? The GDPR doesn’t.

Article 20: Data portability

“Now wait, tante, you scheming … schemer”, you might say, “you said Article 20 was cool! Now it’s not?” Well, dear reader – who I super did not just make up – the issue is that the regulation presents a right that is pointless for most of the use cases it is supposed to address.

So I have the right to take my data out of one service in electronic form to be able to easily migrate to a different service with similar functionality. That is a great idea. Really. But what does that mean?

It’s nice that I don’t have to complete my profile writing all kinds of facts about monkeys. Awesome, saves me a lot of time. But that’s hardly what I want to do.

If I want to leave, for example, Facebook for some other, better, more data-protecty competitor, what use is the data I get out of Facebook? Sure, I have my posts – but only those that didn’t reference some of my friends. My network graph? That’s basically just data about other people, which I can’t just take out because it’s not just “my” data. That kind of export is pointless for migrating to other services. The social graph, the network, more often than not is the value proposition.

So that is a nice right that I get. I just can’t use it for anything meaningful. Thanks.

Summary

These are some of the problems I see with the GDPR as it will go live soon. Personally I can’t complain (being data protection officer for my employer, I now can’t be fired for at least a year), but I hope I could show that some of the articles and ideas that the GDPR is based on are either not doing what they are supposed to do (see consent, for example) or entrench existing monopolies and monocultures.

The GDPR was an important step to harmonize the law in the EU, and I hope that, with the people involved having changed and reality kicking the GDPR around a little bit, some of the worst issues will be fixed or at least amended in the coming years. But if your plan is to regulate Facebook, the GDPR won’t do too much for you. It actually strengthens the big platform providers.

And if you look at the GDPR as a template for your own privacy laws, do what you’d do with new tech: let others experiment with the beta and wait till it’s reached version 1.1 or 1.2. Because this current version is a beta, and there are gonna be crashes and patches.

(This article is free and creative commons licensed so you can do mostly what you want with it. If you still want to support me or this work you can buy me a beverage [Paypal link to donate]. But of course you don’t have to, I appreciate you reading this)

Photo by dennis_convert


Aussie Telcos are Failing at Some Fundamental Security Basics


Recently, I've witnessed a couple of incidents which have caused me to question some pretty fundamental security basics with our local Aussie telcos, specifically Telstra and Optus. It began with a visit to the local Telstra store earlier this month to upgrade a couple of phone plans, which resulted in me sitting alone by this screen whilst the Telstra staffer disappeared into the back room for a few minutes:

This screen faces out into the retail store, with people constantly wandering past it only a couple of metres away, well within the distance required to observe the contents of it. I've obfuscated parts of the screen above because no way, no how would I want to show this information publicly, especially my wife's password. She was pretty shocked when I showed her this, as it was precisely the same verbal password she used to authenticate to her bank. (Sidenote: she's an avid 1Password user and has been since 2011; this password dated back a couple of decades to when, like most people still do today, she had reused it extensively.)

I did raise this directly with Telstra, to which they replied "I want to make sure that this is fully investigated, it's definitely concerning". Yet clearly this is standard practice, with the terminals the operators use specifically designed to face into the public areas of the store, and the interfaces they use obviously designed to show the password (and, equally obviously, the passwords are not stored as secure cryptographic hashes). That was 27 days ago and to date there's been no follow-up from Telstra, despite my being told they'd "update me soon".

Then, just yesterday I saw this one from fellow Aussie techie Geoff Huntley:

As of today, Chrome will show a "Not secure" warning when an unencrypted page requests passwords or credit cards (which appears to be the case here) or when entering text into a form field. In the next few months, it will show all pages requested over an unencrypted connection as "Not secure". The risk this poses is that any intermediary able to intercept the traffic has the ability to read and modify the data (and yes, that applies to internal company networks as well).

Now, when a company is called on the presence of a glaringly obvious security omission, the correct response is to say "thank you for your feedback, we'll escalate this internally". The incorrect response is this one:

Rather than acknowledge the problem, Optus elected to send Geoff a DM asking him to remove the photo (and another similarly benign one of a terminal facing the public) because somehow, that URL in the address bar (which is merely an internal host name) constitutes their intellectual property. It's almost as though they don't want it being shown publicly...

If that was the end of it you probably wouldn't be reading this now, but rather than acknowledging that perhaps there's a problem that needs fixing, Optus stuck their fingers in their proverbial ears and started singing:

Alarmingly, this is not unprecedented; I've been blocked before myself for reporting a security incident. But it's totally unacceptable behaviour on the part of any organisation, let alone one of our largest telcos.

The alarming thing about the way our local telco stores are physically designed is that they result in way too much leakage of sensitive personal information. Not just yours and mine either; that also includes the operators' credentials:

Just how much can you do with those credentials? Assuming you have access to an unattended terminal as I did earlier on (albeit one that was already unlocked), the mind boggles. These are not super-sophisticated security concepts either, they're fundamental basics that most organisations drill into their people: protect what's on your screen, don't allow other people to observe your password, always lock an unattended terminal.

Here's the bigger issue that concerns me in both the Telstra and Optus cases: the security of our telecommunication accounts is increasingly paramount these days. Our phone numbers are used for all sorts of identity verification processes with other services; weaknesses in telco security translate directly through to compromises of email, bank and social accounts; there are some absolute horror stories out there. Want to log in to your myGov account using 2FA? They'll send you an SMS and yes, that's in addition to entering your credentials, but the whole point of 2FA is that it should be resilient to credential theft!

These are not simple fixes: store layouts need changing to protect customer privacy, customer password storage is obviously insufficient, operator practices need to evolve and let's face it, SMS is a very weak means of identity verification, largely because of deficiencies on the telcos' side. But they're important issues in an era of increasing dependency on mobile and one would hope that at the very least, Telstra and Optus would seek to improve the situation rather than simply ignoring or blocking complaints from disgruntled customers.


Test Case


So it finally happened: a self-driving car struck and killed a pedestrian in Arizona. And, of course, the car was an Uber.

(Why Uber? Well, Uber is a taxi firm. Lots of urban and suburban short journeys through neighbourhoods where fares cluster. In contrast, once you set aside the hype, Tesla's autopilot is mostly an enhanced version of the existing enhanced cruise control systems that Volvo, BMW, and Mercedes have been playing with for years: lane tracking on highways, adaptive cruise control ... in other words, features used on longer, faster journeys, which are typically driven on roads such as motorways that don't have mixed traffic types.)

There's going to be a legal case, of course, and the insurance corporations will be taking a keen interest because it'll set a precedent and case law is big in the US. Who's at fault: the pedestrian, the supervising human driver behind the steering wheel who didn't stop the car in time, or the software developers? (I will just quote from CNN Tech here: "the car was going approximately 40 mph in a 35 mph zone, according to Tempe Police Detective Lily Duran.")

This case, while tragic, isn't really that interesting. I mean, it's Uber, for Cthulhu's sake (corporate motto: "move fast and break things"). That's going to go down real good in front of a jury. Moreover ... the maximum penalty for vehicular homicide in Arizona is a mere three years in jail, which would be laughable if it wasn't so enraging. (Rob a bank and shoot a guard: get the death penalty. Run the guard over while they're off-shift: max three years.) However, because the culprit in this case is a corporation, the worst outcome they will experience is a fine. The soi-disant "engineers" responsible for the autopilot software experience no direct consequences of moral hazard.

But there are ramifications.

Firstly, it's apparent that the current legal framework privileges corporations over individuals with respect to moral hazard. So I'm going to stick my neck out and predict that there's going to be a lot of lobbying money spent to ensure that this situation continues ... and that in the radiant Randian libertarian future, all self-driving cars will be owned by limited liability shell companies. Their "owners" will merely lease their services, and thus evade liability for any crash when they're not directly operating the controls. Indeed, the cars will probably sue any puny meatsack who has the temerity to vandalize their paint job with a gout of arterial blood, or traumatize their customers by screaming and crunching under their wheels.

Secondly, sooner or later there will be a real test case on the limits of machine competence. I expect to see a question like this show up in an exam for law students in a decade or so:

A child below the age of criminal responsibility plays chicken with a self-driving taxi, is struck, and is injured or killed. Within the jurisdiction of the accident (see below) pedestrians have absolute priority (there is no offense of jaywalking), but it is an offense to obstruct traffic deliberately.

The taxi is owned by a holding company. The right to operate the vehicle, and the taxi license (or medallion, in US usage), are leased by the driver.

The driver is doing badly (predatory pricing competition by the likes of Uber is to blame for this) and is unable to pay for certain advanced features, such as a "gold package" that improves the accuracy of pedestrian/obstacle detection from 90% to 99.9%. Two months ago, because they'd never hit anyone, the driver downgraded from the "gold package" to a less-effective "silver package".

The manufacturer of the vehicle, who has a contract with the holding company for ongoing maintenance, disabled the enhanced pedestrian avoidance feature for which the driver was no longer paying.

The road the child was playing chicken on is a pedestrian route closed to private cars and goods traffic but open to public transport.

In this jurisdiction, private hire cars are classified as private vehicles, but licensed taxis are legally classified as public transport when (and only for the duration) they are collecting or delivering a passenger within the pedestrian area.

At the moment of the impact the taxi has no passenger, but has received a pickup request from a passenger inside the pedestrian zone (beyond the accident location) and is proceeding to that location on autopilot control.

The driver is not physically present in the vehicle at the time of the accident.

The driver is monitoring their vehicle remotely from their phone, using a dash cam and an app provided by the vehicle manufacturer but subject to an EULA that disclaims responsibility and commits the driver to binding arbitration administered by a private tribunal based in Pyongyang acting in accordance with the legal code of the Republic of South Sudan.

Immediately before the accident the dash cam view was obscured by a pop-up message from the taxi despatch app that the driver uses, notifying them of the passenger pickup request. The despatch app is written and supported by a Belgian company and is subject to an EULA that disclaims responsibility and doesn't impose private arbitration but requires any claims to be heard in a Belgian court.

The accident took place in Berwick-upon-Tweed, England; the taxi despatch firm is based in Edinburgh, Scotland.

Discuss!

1 public comment
istoner:
Stross on the future of self-driving cars and crushed pedestrians:

"Indeed, the cars will probably sue any puny meatsack who has the temerity to vandalize their paint job with a gout of arterial blood, or traumatize their customers by screaming and crunching under their wheels."

Surely marketing would deem that off-brand? More likely is an ignition license requiring secret arbitration to settle the company's totally legit blood-gout complaint.

Reverse Engineering and Serial Adapter Protocols

In the comments to my latest post on the Silicon Labs CP2110, the first comment got me more than a bit upset, because it was effectively trying to mansplain to me how a serial adapter (or more properly a USB-to-UART adapter) works. Then I realized there’s one thing I can do better than complain, and that is providing even more information on this for the next person who might need it. Because I wish I had known half of what I know now back when I tried to write the driver for the ch341.

So first of all, what are we talking about? UART is a very wide definition for any interface that implements serial communication between a host and a device. The term “serial port” probably brings different ideas to mind depending on the background of a given person, whether it is mice and modems connected to PCs, or servers’ serial terminals, or programming interfaces for microcontrollers. For the most part, people in the “consumer world” think of serial as RS-232, while people who have experience with complex automation systems, whether home, industrial, or vehicle automation, have RS-485 as their main reference. None of that actually matters here, since those standards mostly deal with electrical and mechanical specifications.

As physical serial ports stopped appearing on computers many years ago, most users moved to USB adapters. These adapters all differ from each other, and that’s why there are around 40KSLOC of serial adapter drivers in the Linux kernel (according to David A. Wheeler’s SLOCCount). And that’s without counting the remaining 1.5KSLOC implementing CDC ACM, which is the supposedly-standard approach to serial adapters.

Usually the adapters are placed either directly on the “gadget” that needs to be connected, exposing a USB connector, or on a cable used to connect to it, in which case the device usually has a TRS or similar connector. TRS-based serial cables appear to have become more and more popular thanks to osmocom, as they are relatively inexpensive to build, both as cables and as connectors onto custom boards.

Serial interface endpoints in operating systems (/dev/tty{S,USB,ACM}* on Linux, COM* on Windows, and so on) not only transfer data between host and device, but also provide configuration of parameters such as transmission rate and “symbol shape” — you may or may not have heard references to something like “9600n8”, which is a common way to express the transmission protocol of a serial interface: 9600 symbols per second (“baud rate”), no parity, 8 bits per symbol. You can call these “out of band” parameters, as they are transmitted to the UART interface, but not to the device itself, and they are the crux of the matter when interacting with these USB-to-UART adapters.
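
To make the distinction concrete, here is a minimal sketch of opening a “9600n8” endpoint with the pyserial package (the device node and the bytes written are hypothetical placeholders):

import serial

# Open a hypothetical adapter at 9600n8; these keyword arguments are the
# out-of-band parameters, consumed by the adapter rather than the device.
port = serial.Serial(
    "/dev/ttyUSB0",                # hypothetical USB-to-UART adapter node
    baudrate=9600,                 # 9600 symbols per second
    parity=serial.PARITY_NONE,     # "n": no parity
    bytesize=serial.EIGHTBITS,     # "8": 8 bits per symbol
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)

# Only the bytes written here are in-band and travel to the device.
port.write(b"\x05")
print(port.read(64))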

I already wrote notes about USB sniffing, so I won’t go into much detail here, but most of the time, when you’re trying to figure out what the control software sends to a device, you start by taking a USB trace, which gives you a list of USB Request Blocks (effectively, transmission packets), and you get to figure out what’s going on from there.

For those devices that use USB-to-UART adapters and actually use the OS-provided serial interface (that is, COM* under Windows, where most of the control software has to run), you could use specialised software to intercept just the communication on that interface… but I don’t know of any modern software that does this, while there are at least a few well-defined interfaces to intercept USB communication. And interface-level interception would not work for software that accesses the USB adapter directly from userspace, which is always the case for the Silicon Labs CP2110, but is also the case for some of the FTDI devices.

To be fair, for those devices that use TRS, I have actually considered just intercepting the serial protocol using the Saleae Logic Pro, but besides being overkill, that would only cover the tiny fraction of devices that can be intercepted that way — the more modern ones just include the USB-to-UART chip straight on the device, which is also the case for the meter using the CP2110 I referenced earlier.

Within the request blocks you’ll find not just the serial communication, but also all the related out-of-band information, which is usually terminated on the adapter/controller rather than being forwarded to the device. The amount of information changes widely between adapters. Of those I have direct experience with, I found one (TI3420) that requires a full firmware upload before it starts working, which means recording everything from the moment you plug in the device produces a lot more noise than you would expect. But most of those I dealt with had very simple interfaces, using Control transfers for out-of-band configuration, and Bulk or Interrupt[1] transfers for transmitting the actual serial data.

With these simpler interfaces, my “analysis” scripts (if you allow me the term; I don’t think they are that complicated) can produce a “chatter” file quite easily by ignoring the whole out-of-band configuration. Then I can analyse those chatter files to figure out the device’s actual protocol, and for the most part it’s a matter of trying between one and five combinations of transmission protocol to find the right one to speak to the device — in glucometerutils I have two drivers using 9600n8 and two drivers using 38400n8. In some cases, such as the TI3420 one, I actually had to figure out the configuration packet (thanks to the Linux kernel driver and the datasheet) to discover that it was using 19200n8 instead.
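
As a sketch of that trial-and-error step (the probe message and the validity check are hypothetical placeholders; the candidate list just mirrors the combinations mentioned above):

import serial

CANDIDATE_BAUD_RATES = (9600, 19200, 38400)

def find_baud_rate(node, probe, looks_valid):
    # Try each plausible configuration until the device answers with
    # something the caller recognises as a valid response.
    for baud in CANDIDATE_BAUD_RATES:
        with serial.Serial(node, baudrate=baud, timeout=1) as port:
            port.write(probe)
            if looks_valid(port.read(64)):
                return baud
    return None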

But again, for those, the “decoding” is just a matter of filtering away parts of the transmission to keep the useful ones. For others it’s not as easy.

0029 <<<< 00000000: 30 12                                             0.
0031 <<<< 00000000: 05 00                                             ..
0033 <<<< 00000000: 2A 03                                             *.
0035 <<<< 00000000: 42 00                                             B.
0037 <<<< 00000000: 61 00                                             a.
0039 <<<< 00000000: 79 00                                             y.
0041 <<<< 00000000: 65 00                                             e.
0043 <<<< 00000000: 72 00                                             r.

This is an excerpt from the chatter file of a session with my Contour glucometer. What happens here is that instead of buffering the transmission and sending a single request block with a whole string, the adapter (FTDI FT232RL) sends short bursts, probably to reduce latency and keep a more accurate serial protocol (which is important for devices that need accurate timing, for instance some in-chip programming interfaces). This would also be easy to recompose, except it also comes with

0927 <<<< 00000000: 01 60                                             .`
0929 <<<< 00000000: 01 60                                             .`
0931 <<<< 00000000: 01 60                                             .`

which I’m somewhat sceptical come from the device itself. I have not paid enough attention yet to figure out from the kernel driver whether this data is marked as coming from the device, or is some kind of keepalive or synchronisation primitive of the adapter.
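
For what it’s worth, recomposing this kind of chatter is short work once you decide what to drop; here is a sketch, under the explicit (and unverified) assumption that those 0x01 0x60 pairs are adapter status rather than device data:

def recompose(bursts):
    # Join the short bursts back together, dropping the recurring
    # b"\x01\x60" pairs on the assumption that they are adapter
    # status/keepalive rather than device data.
    return b"".join(burst for burst in bursts if burst != b"\x01\x60")

bursts = [b"\x42\x00", b"\x61\x00", b"\x79\x00",
          b"\x65\x00", b"\x72\x00", b"\x01\x60"]
# The excerpt above carries little-endian 16-bit characters.
print(recompose(bursts).decode("utf-16-le"))  # -> "Bayer"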

In the case of the CP2110, the first session I captured starts with:

0003 <<<< 00000000: 46 0A 02                                          F..
0004 >>>> 00000000: 41 01                                             A.
0006 >>>> 00000000: 50 00 00 4B 00 00 00 03  00                       P..K.....
0008 >>>> 00000000: 01 51                                             .Q
0010 >>>> 00000000: 01 22                                             ."
0012 >>>> 00000000: 01 00                                             ..
0014 >>>> 00000000: 01 00                                             ..
0016 >>>> 00000000: 01 00                                             ..
0018 >>>> 00000000: 01 00                                             ..

and I can definitely tell you that the first three URBs are not sent to the device at all. That’s because HID (the higher-level protocol that the CP2110 uses on top of USB) uses the first byte of each block to identify the “report” it sends or receives. Checking these against AN434 gives me a hint of what’s going on:

  • report 0x46 is “Get Version Information” — the CP2110 always returns 0x0A as the first byte, followed by a device version, which is unspecified; probably only used to confirm that the device is the right one, and possibly for debugging purposes;
  • report 0x41 is “Get/Set UART Enabled” — 0x01 just means “turn on the UART”;
  • report 0x50 is “Get/Set UART Config” — and this is a bit more complex to parse: the first four bytes (0x00004B00) define the baud rate, which is 19200 symbols per second; then follows one byte for parity (0x00, no parity), one for flow control (0x00, no flow control), one for the number of data bits (0x03, 8 bits per symbol), and finally one for the stop bits (0x00, short stop bit); that’s a long way to say that this is configured as 19200n8 (a sketch decoding this layout follows the list).
  • report 0x01 is the actual data transfer, which means the transmission to the device starts with 0x51 0x22 0x00 0x00 0x00 0x00.
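
As promised, a sketch of decoding the report 0x50 payload according to the field layout just described (the parity table beyond “no parity” is my reading of AN434, so treat it as an assumption):

import struct

# Parity values as I read them from AN434: none, odd, even, mark, space.
PARITY = {0x00: "n", 0x01: "o", 0x02: "e", 0x03: "m", 0x04: "s"}

def decode_uart_config(payload):
    # payload: the 8 bytes that follow the 0x50 report ID.
    baud = struct.unpack(">I", payload[0:4])[0]   # 0x00004B00 -> 19200
    parity = PARITY.get(payload[4], "?")
    data_bits = payload[6] + 5                    # 0x03 -> 8 bits/symbol
    return "%d%s%d" % (baud, parity, data_bits)

print(decode_uart_config(bytes.fromhex("00004b0000000300")))  # 19200n8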

This means that I need a smarter analysis script that understands this protocol (which may be as simple as just ignoring anything that does not use report 0x01) to figure out what the control software is sending.
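
Following the simplification suggested above (keep only report 0x01), that script can be tiny; a sketch, where the capture format (a list of direction/payload pairs) is a hypothetical stand-in for whatever the parser produces:

def uart_chatter(urbs):
    # Keep only the report 0x01 blocks and splice their payloads back
    # into one stream per direction; everything else is configuration.
    streams = {">>>>": bytearray(), "<<<<": bytearray()}
    for direction, block in urbs:
        if block and block[0] == 0x01:
            streams[direction] += block[1:]
    return streams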

And at the same time, it needs code that knows how to “talk serial” to this device. Usually the out-of-band configuration is done by a kernel driver: you ioctl() the serial device to the transmission protocol you need, and the driver sends the right request block to the USB endpoint. But in the case of the CP2110, there’s no kernel driver implementing this, at least per Silicon Labs’ design: since HID devices are usually exposed to userland, and in particular to non-privileged applications, sending and receiving the reports can be done directly from the apps. So indeed there is no COM* device exposed on Windows, even with the drivers installed.
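
To illustrate that userspace approach, here is a sketch using the hidapi Python bindings; the VID/PID pair and the use of feature reports for configuration are my assumptions from AN434, not something this capture proves:

import hid

dev = hid.device()
dev.open(0x10C4, 0xEA80)  # assumed default VID/PID of the CP2110

# Configuration reports go out of band, as feature reports.
dev.send_feature_report(bytes([0x41, 0x01]))   # report 0x41: UART enable
dev.send_feature_report(bytes([0x50, 0x00, 0x00, 0x4B, 0x00,  # 19200 baud
                               0x00, 0x00, 0x03, 0x00]))      # n, 8 bits

# Data reports carry the in-band bytes.
dev.write(bytes([0x01, 0x51]))        # report 0x01: one data byte, 0x51
print(dev.read(64, timeout_ms=1000))  # device-to-host data, if any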

Could someone (me?) write a Linux kernel driver that exposes the CP2110 as a serial, rather than HID, device? Sure. It would require fiddling with the HID subsystem a bit to have it ignore the original device, and that means it would probably break any application built with Silicon Labs’ own development kit, unless someone has a suggestion on how to have both interfaces available at the same time; on the other hand, it would allow accessing those devices without special userland code. But I think I’ll stick with the idea of providing a Free and Open Source implementation of the protocol, in Python. And maybe add support for it to pyserial to make it easier for me to use.


  1. All these terms make more sense if you have at least a bit of knowledge of how USB works behind the scenes, but I don’t want to delve too much into that.