Many of the root certificates on Windows are not needed (hexatomium.github.io)
255 points by svenfaw on Oct 19, 2020 | 113 comments



For folks discovering this: Unfortunately, this isn’t a good idea, and can seriously harm your system.

I’ll be the first to tell you that Google and Mozilla have done a lot for supervising TLS, but realize that the trust stores serve many other, non-TLS purposes as well, and some of the CAs are constrained from TLS issuance entirely (e.g. trusted only for S/MIME).

CryptoAPI, for all its warts, is actually beautifully engineered, in that Microsoft has had the ability to add arbitrary constraints and properties to certificates since the very first release (via CERT_PROP_IDs). You don’t really see these in the UI; you only find out about them in WinCrypt.h, or via debugger stepping. This means that even if the UI shows a root as “trusted for TLS”, Microsoft may have disabled trust for TLS via these extended properties, which their APIs respect, and which are delivered through authroots.cab.

You can see a little bit about this at https://github.com/crtsh/certwatch_db/issues/69 and http://unmitigatedrisk.com/?p=259

Approaches like the one in the OP link fail to take that into consideration, and can easily break core OS services, non-TLS cases like code signing or S/MIME, or even reintroduce trust in CAs that Microsoft has programmatically disabled.


> CryptoAPI, for all its warts, is actually beautifully engineered, in that Microsoft has had the ability to add arbitrary constraints and properties to certificates since the very first release (via CERT_PROP_IDs). You don’t really see these in the UI; you only find out about them in WinCrypt.h, or via debugger stepping. This means that even if the UI shows a root as “trusted for TLS”, Microsoft may have disabled trust for TLS via these extended properties, which their APIs respect, and which are delivered through authroots.cab.

I mean no disrespect, but if you need to look through header files or use step-through debugging to find out which constraints actually apply to a certificate, especially when the ones reported for the certificate don't apply in reality, then "beautifully engineered" doesn't seem like the right adjective to use.

edit: I don't see this as a documentation issue; when the only interface you provide the user says one thing, but stepping through the code reveals that it's entirely untrue for certain items in certain circumstances, then having a better ReadMe is not the fix.


As someone who's actually had to use CryptoAPI, I think it's unpleasant to use, and the documentation often isn't good enough. There are also usually at least four different ways of getting to the same result, which makes things very confusing. And some features are only supported from Windows 7 or Windows 10 onward.

"Beautifully engineered" is not the phrase that comes to mind.


CryptoAPI is remarkably flexible and also really hard to use, for a bunch of reasons:

1. It does a lot of things. It's an OS API for talking to cryptographic hardware, including the TPM and smart cards, in addition to being the API for doing run-of-the-mill cryptographic operations using the OS's implementation. That's on top of serving as a primary way to load/use certificates (through the NCrypt API, though often you can't entirely avoid dipping into CryptoAPI).

2. It's really, really low level, meaning that you have to directly deal with the intersection of driver/hardware behaviors, feature matrices, and data structure serialization/packing.

3. It's actually two APIs merged into the same set of function calls: CAPI and CNG. Depending on which API is supported for your usage or supported by the hardware drivers, the behavior of the same call might be subtly different.

4. It's a C API with the classic Win32 structure-packing conventions, including the really fun ones: the "allocate the buffer, pack the data structure, and set the offsets yourself, with no API to do it for you" convention, and the "call the function with a null data pointer to get the size of the buffer you need, then allocate that buffer and call it again to actually do the work" convention.

5. Because it's so low level and so few people use it, there's very little documentation on how to troubleshoot errors when you do things wrong, and the API is complex enough that you will do it wrong, and then you will spend a few days trial-and-erroring your way out of that hole. There are a lot of places in the API where, in order to know how to do something, you need to look up the behavior of the component you're talking to on the device/provider side of the API, but those providers are often under-documented.

Most developers should try to avoid using it because it's so easy to mess up. The API is extremely powerful, but it's also more of a footgun than it should be.


I know it's not usually a thing to reply to your own comment, but I wanted to add an important detail.

The reason for point #1 is that doing all of these things in one API is a useful framing of the problem. For example, if you want to just use a certificate or cryptographic key pair regardless of whether the private key is in a file or stored in a hardware module, CryptoAPI unifies that interface so that, if you do it right, you have two orthogonal pieces of code: one that locates the key to use and which adapter to use to talk to the underlying software/hardware, and another to request cryptographic operations on an abstracted key pair. The fact that you're talking to pure software or an actual HSM can be abstracted from the code that needs to sign a challenge.

The API is powerful enough that its abstractions work even for cloud-based key storage. For example, AWS CloudHSM support for CryptoAPI: https://docs.aws.amazon.com/cloudhsm/latest/userguide/ksp-li...


It could certainly be the case that it's poorly designed, but that doesn't necessarily follow just from the feature being hidden from the UX.

First, the backend constraints may be ideal regardless of whether any of them are exposed in the UI. Second, not exposing them in the UI may have been an explicit design decision, not just a trade-off. It's possible that a root certificate list was deemed complicated enough without adding a zillion more hard-to-debug toggles that might allow curious users to accidentally break something in a very confusing way, and that allowing only programmatic access was deemed best, because the settings would likely only be used by enterprises managing their fleet or some such.


Even if the policy data is absent from UI for legitimate reasons, if the policy can't be programmatically extracted, the documentation can't be trusted.

That is, the policy should be a declarative object interpreted by a well-tested engine, or a small simple bit of logic, not a crawl through the engine code looking for 'if' statements.


Eh, my point was just that:

  - Policy is largely encapsulated in the certificate properties in the root store
    - Local policy is implemented via registry keys, with something like 100-odd registry keys (... many undocumented, as they implement customer-specific features whose documentation is provided under a MSFT support agreement)
  - Policy is a mix of documented (via WinCrypt.h) and undocumented flags
    - The reason for not documenting some of this is that they’re flags Microsoft may or may not support; that is, they’re implementation details
    - CAPI provides, by design, several ways to fully replace their policies, either for a single trust purpose or for all of them
  - The UI exposes a suitable “general purpose” expression
I can totally understand the complaint that this isn’t clear in the UI, and as others have noted, sometimes that’s intentional.

I can also totally understand the complaint that this isn’t all meticulously documented. Not everyone can ring up their pet Microsoft engineer to get documentation and show how it connects to a problem that Microsoft benefits from helping us solve (the linked-to crt.sh bug, which as a side-effect helps provide greater automation for Microsoft and the CAs they supervise). However, Microsoft also, from the get-go, designed it to be extensible so you could replace this.

Importantly, Microsoft has been able to roll out new features, and remove trust in CAs, without major re-engineering work. That was the “beautiful engineering” part of my comment. They defined a stable API that could be easily extended, or even wholesale replaced, back in the Windows 2000/XP era, and that API has delivered considerable improvements WITHOUT requiring you to rewrite your code for Windows Vista, 7, 8, or 10 to take advantage of them. That’s a considerable difference from some other libraries and tools: LibreSSL was hugely constrained fixing OpenSSL’s broken chain building, Mozilla Firefox/NSS has undergone three complete, separate ground-up rewrites of the engine, Apple deprecated dozens of APIs when they ported iOS’s verifier to macOS, etc. As an engineer, 20+ years of stability IS impressive!


Having recently implemented some new product features using several parts of the CryptoAPI, my experience agrees. There were several instances of spending days on dead ends trying to get something to work the way I wanted, eventually finding the correct way through trial and error or some obscure ten-year-old example code on the internet. But once you cracked it, the code and API felt like they would be rock-solid for decades to come.

It’s a mess of discoverability and documentation, but as someone else in this thread said, beautifully engineered.


> Mozilla Firefox/NSS has undergone three complete and separate rewrites from the ground up of the engine

Can you provide more details? I'm only aware of one of these rewrites...


1. Legacy

2. Trust Domains (half completed, but littered throughout the code as the nss3 prefix). This was being led by Sun and stopped when Oracle acquired them.

3. libpkix: This was done by porting Java code to C using preprocessor macros to simulate exception handling. It implemented path discovery, and not just verification, and was used by Firefox for EV processing (AIUI), and was always used by Chrome on Linux/ChromeOS (until recently replaced with the Chromium built-in verifier)

4. mozilla::pkix, which started off as Brian Smith’s insanity::pkix rewrite of a minimalist path builder/verifier. I can’t remember if this launched while Brian was still at Mozilla or after he had left, but Brian would later take the approach he used for insanity::pkix when writing the Rust webpki project ( https://github.com/briansmith/webpki )

Each of the above APIs had significantly different interfaces for controlling verification. The trust domain stuff wasn’t as visible, because it was only half-completed.


Ah good, so mozilla::pkix is the newest one and I don't have to worry about that being replaced by something else instead.


I suppose it’s a question of whether you count “documentation” as engineering.

It is elegantly complex and featureful. And (sometimes intentionally) poorly documented, as much of that implementation is seen as “an implementation detail”.


Especially when it comes to APIs, documentation is 100% part of the engineering. Very few non-trivial APIs are self-explanatory, because you have to map a complex problem space onto an imperfect, engineering-focused formulation of the problem domain, and the mapping usually involves introducing design patterns to make the problem space easier to manage in code.


It can be beautifully engineered yet poorly documented. It happens to quite a lot of software actually.


I think documentation needs to give more context about design decisions/principles and why they were made. And hopefully the software sticks to some principles: if you can learn a small number of orthogonal principles, you can ideally predict the behavior of the software without consulting the documentation.


The approach Microsoft took here encourages something I think we both agree is a bad idea: roots used for multiple, potentially conflicting purposes. The owner of the multi-purpose root will either make a big fuss when policy changes down the road make that conflict painful, or worse, they'll persuade themselves that we can't really have meant what we just told them because it's so painful; "obviously" the policy change doesn't apply to their multi-purpose root, and they'll ignore it until discovered.

If for example Microsoft had separate "Code signing" and "Web PKI" trust programmes then you get two good things for the price of one: Less incentive for CAs to do things we'll regret even if they don't; and clarity for users about the purpose of the trust relationship.

Mozilla could do better here too: it's actually not easy to discern from Firefox which built-in CA roots are actually trusted in the Web PKI, and as far as I know it's outright impossible to discover any of the special-case restrictions described in https://wiki.mozilla.org/CA/Additional_Trust_Changes from the user interface of the software itself.


> roots used for multiple potentially conflicting purposes

Wouldn't this lead to a proliferation of protocol/purpose-specific roots, to the extent that there might be so many critical roots that administrators become careless about vetting requests to add a root, or about auditing logs when a root is added?

It feels like the solution is a better UI to visualize the many-to-many relationship between roots and protocols/purposes, rather than requiring every root to be single-purpose.


I think you’re right for questioning the end-state, but you may be missing the current status quo.

The current status quo with multi-purpose roots is that supervision of a root, by the browser, the auditor, and the CA themselves, constantly has to account for every purpose a certificate is trusted for when evaluating the effective policies and design.

The classic example I give is that multiple CAs would have policies like “If it’s intended for TLS server auth, we put the domain name in the CN field and make sure it’s a real domain name. If it’s for document signing, we put the company name in the CN field and make sure it’s a real company name”, with no further distinction between the two. So what happens when you have a company named “pets.com”: did they validate the domain or not?

What if they misissued a TLS certificate? For TLS, you can quantify the impact, and you can move to distrust the CA, thinking only about how TLS works in your application. But what if they also issued certificates for lawyers, used in judicial proceedings, that rely on the CA being trusted in the OS? This was exactly the problem with DigiNotar, and why it took Microsoft over a month to fully respond, where Google and Mozilla took days.

We see similar issues with audits, and CAs having sub-CAs not “intended” to issue a particular type of certificate, but totally doing so.

For lack of a better analogy, and since earlier in this thread I was talking about API design, the choice of singular trust purposes is like the S in SOLID: you want a single, clear responsibility, done well. It makes it easier to change and evolve that API when you don’t have to worry about breaking 30-odd unrelated use cases, and it lets you narrowly focus supervision on the task at hand.


Pretty bad that “trusted” could mean blocked and it was only possible to know this in Windows 10


So it’s hidden magic, yet the EULA states you are responsible for choosing which roots to trust.


If you can't even remove CAs from the trust store without breaking it, then it doesn't fucking work, does it?


I love how the author asks the reader to implicitly distrust the OS vendor, but then also to implicitly trust their random software. Seems legit to me.


Exactly this. If this was an EFF tool maybe I would trust it.

But a binary hosted on GitHub? Especially since GitHub is whitelisted by major browsers, so you don't get a reputational warning when downloading a binary. This is exactly the place where a malware vendor would put its stuff to avoid scary browser warnings.

Also, privacy tools are perfect targets for intelligence agencies, hackers, ... since they are used by people with something valuable (bitcoin, information, ...) so you need to be extra-careful around them.


What browser is this?

Chrome will try its best to not let me download .zip files containing .exe files.


I've never heard of an exemption from virus warnings. If I upload an exe to a GitHub repo and try to download it, Chrome will warn me that it's not commonly downloaded. Same for Google Drive: even if it's scanned for viruses, Chrome might throw a warning that requires you to go to the downloads page and allow it.


Distrust Amazon Root CA 1? This is the root CA for a lot of AWS API gateways. Waaaaay overkill.


Those selected in the screenshot are included in the Mozilla list, and thus to be trusted. The instructions say to distrust the inverse of that selection. Here's a quote from the instructions:

    2. Right-click the selection and click Invert selection
    3. Right-click the selection and click Distrust


Firefox doesn't trust the root CA for a lot of AWS API gateways?


It's actually quite common for some root stores/browsers not to trust some CAs. In this case there will be some cross-sign from another CA that is trusted in Firefox.

Stuff like this is quite common; we published a paper on this recently if you are interested in details: https://arxiv.org/abs/2009.08772


https://medium.com/@sleevi_/path-building-vs-path-verifying-... also has some utilities to visualize this using JS to explore these relationships, and understand the code tradeoffs.


For reference. I can’t actually find a list of Google’s right now.

Mozilla: https://wiki.mozilla.org/CA/Included_Certificates

EDIT: and yes, it would have been nice for the author to link directly to these... Google clearly has their own, but obviously there’s some debate about what that means.


Chrome uses the OS-level cert store, whereas Mozilla uses its own.


This is configurable for both, I believe. It's common for enterprises to switch Firefox to use the OS store to support internal certificates.


Chrome on non-ChromeOS platforms can't be configured with its own cert store; it just isn't there, though they keep a list of some blacklisted root CAs.

https://www.chromium.org/Home/chromium-security/root-ca-poli...


You can easily import certs into OS stores. I've actually rarely seen Firefox as a first option, more likely as a secondary option.

You're likely going to need to manipulate the OS store regardless, so Firefox is just extra work.


I think you misinterpreted his comment. He was saying Firefox by default uses its own store, so enterprises switch it to use the OS store when they need it to use internal certs.


Specifically Mozilla provides a flag which has the effect of accepting the additional corporate trust only.

So e.g. if Microsoft decides for $1B cash they trust Honest Abe's Totally American For-Profit Church O'Certs and ships a Windows 10 update to enable that, Firefox still won't accept Abe's bogus google.com cert even with the enterprise mode on. But if your head of IT decides they'd very much like to issue certificates for internal-test-server.example and pushes a new root CA from their laptop via Group Policy, Firefox in enterprise mode will trust that cert because it was local policy.

Basically the idea is Mozilla won't substitute Microsoft's decisions for their own, but your own local certificate policies are different. So you can opt into the latter in Firefox, but not the former.


Ah, that's probably the more reasonable interpretation, thanks.



Great podcast episode which discusses Mozilla/Google's CA lists, as well as the story of Gervase Markham (who recently passed) working on making the first fully open and transparent root program at Mozilla.

https://darknetdiaries.com/transcript/3/


What do you mean by Google? Google Chrome uses whatever the operating system provides. Nothing specific in Chrome. Firefox uses their own store.


Chrome on ChromeOS has had its own trust store for ages. Chrome on Android similarly uses Google’s Android store.



Is that not Android's list? Or does Google ship those on all platforms it runs on?


So people are just allowed to post ads for closed source software now?


Yes, it's called SaaS, and their "ad-blogs" make up about 20% of all the top stories on HN. Startups that charge you to use their proprietary software but write good clickbait or Show HNs. (I'm not hating, I'm just saying this is not surprising in any way)


I always wondered how painful it would be to clear out my root store and only re-enable root CAs as I needed them. Would I need more than 5% of them?


I did this in Firefox many years ago and, to a lesser extent, on Windows and Android. Firefox and Windows changes caused minor, obvious problems which I was able to resolve quickly. On Android I found several apps were so badly programmed they simply displayed blank screens, or showed other strange failures, when the required SSL certs were distrusted. Steam, Naver Line and tumblr were the three that I recall fell under this. There were a few apps which immediately threw a human-readable error about certificates, and hence were easier to resolve.


Experience would probably be similar to using NoScript in a browser. Lots of broken things at first, then tapering off.


For example: redundancy, in case any of those is revoked, exposed, or compromised.


How does Apple's iOS/macOS trust store look, by comparison?


The list can be found here: https://support.apple.com/en-us/HT210770


I don't know how to check for iOS, but macOS is showing 164 trusted certificates for 10.15.7. Via the "Keychain Access.app", select the "System Roots" item, then select all of the items in the right panel, then open the context menu using the two-finger tap method (or control + mouse click)

If there's a better way to check this, feel free to post


file:///System/Library/Security/Certificates.bundle/Contents/Resources/TrustStore.html

Your method (164, like you) and this one (213) yield different results on my system (10.15.7).

EDIT: my bad

TrustStore.html also contains Untrusted & Blocked Certificates.


Use this command to extract the list of root certificates:

    security find-certificate -a -p /System/Library/Keychains/SystemRootCertificates.keychain > all-certs.pem
You can then dump each certificate:

    split -p "-----BEGIN CERTIFICATE-----" all-certs.pem cert-
    openssl x509 -text -in cert-aa
    openssl x509 -text -in cert-ab
    ...


OK, so let's continue this trend and we end up trusting just one, concentrating the entire internet's power into just one hand. Because it's such a good idea, right?


Why? Just because there is a slippery slope argument to be made doesn't mean we have to actually do what it says needs to be done to get to its theoretical end point.

Sure, we _could_ remove everything until there's one left, but why would we? If you just want a smaller, publicly vetted list of roots that you can trust because the org that curates that set is explicitly on a mission to keep the internet free, open, and safe for all, then there is no "next step", this is the first and also last one.

Is it a good idea to do this? Highly depends on what you do and what you need, but the idea that "we can just keep doing this until there's only one" is super flaky: how many more sets of root certs, vetted by orgs that can be trusted to have privacy and security in mind, are there? Because no org that fits that bill will have a list that is "just the one". There's going to be 100+ certs in their lists.


In case you missed it, my comment was sarcasm.


There is no anonymous sarcasm on the internet, unless you tell people up front. If not, there's only text at face value.


Well, the situation now is you trust 300 CAs and use one (figuratively). Apart from that, these are root CAs meaning any single one of them could compromise every TLS connection, right? So increasing the number of root CAs doesn't divide their power.


Are 2 evil hands better than 1? What's needed are more good hands.


There are no good hands. Governments are slow to catch up, but they do eventually, and they have caught up with the technology and with the understanding that the internet is freedom. Our freedom was these 25 years we had from '95, when the proliferation of the internet started, up to today. Lockdown showed even more clearly that they want tight control over our freedom.

Unfortunately the future is the splinternet, where each country has its own grip on this new medium called the internet, the same as they do through regulation of TV and radio.


"Windows trusts 322 root certificates issued by 122 different organizations"

I think this is a good point. I would never trust 122 people in my life with my bank-account details, but if I trust my browser, it seems that is exactly what I'm doing.

Isn't there a better way?


You do trust one company with thousands of employees, though; there are layers/levels of trust in this.


I don't think the high number of CAs is as big a risk as it sounds like.

1. To exploit this, the attacker needs to be in a man-in-the-middle position and compromise a CA

2. Certificate transparency helps with detecting and banning misbehaving CAs

3. CAs are audited to have processes in place which make it difficult for individuals to misbehave

4. In case of a large scale attack of this kind, banks are relatively likely to refund the money (unlike money stolen by a person you authorized)

5. The individuals employed by CAs are not anonymous and at risk of legal prosecution

I'd be more worried about DNS/registrar level attacks, since CAs generally assume that whoever has control over a domain is authorized to issue certificates for it. I doubt registrars are anywhere near as security conscious as CAs.

And then there is the dependency hell of random software having hundreds of indirect dependencies they didn't verify which are maintained by random, often anonymous people. On most desktop OSs any such application can steal your banking credentials.


No, to exploit this, you just need to either 1) work inside a CA, 2) be a spy and infiltrate a CA, 3) use your nation-state powers to force a CA within your borders to do what you want, or 4) be a clever hacker and hack a CA. There are "only" 330 different complex organizations in probably a hundred-plus countries to try, so I guess this is pretty unlikely.

But even if it is, you can actually hack the entire system with just a BGP hijack. Today, if you announce some false BGP routes, you can MITM LetsEncrypt (or one of the other 329 CAs) into signing a valid cert for you. It's not hard, because validation is automated and depends on proving you own the IP space... and IP space is just what your router's BGP table says it is at the moment a cert is issued.

I would not depend on the kindness of banks giving back money on very large scales. They care about themselves more than they care about you.

You're right, DNS registrar and nameserver attack is another great attack and usually they're pretty vulnerable, so that's another good option.


I see there are safeguards in place. But is it true that it is enough to compromise one CA to compromise my browser? Or would a hacker have to compromise ALL of the 322 certificates?

If the latter then having more certificates would simply mean it is more secure. If the former it would mean having more certificates makes it less secure.

Or is it something in between? Do additional certificates add to the security, or diminish from it?


It is, generally speaking, enough for the attacker to compromise one CA. Every CA can issue certificates for all domains, with some exceptions where the browser knows which CA should be issuing the certificate; Google Chrome has a hardcoded list of high-value targets, IIRC. All of that is ignored when you have a local CA trust root, though, like many enterprises do (for their TLS-intercepting proxies).

And for those CAs, even safeguards such as Certificate Transparency are moot. As a consultant, I never agree to install one of those CA certs in my trust store. I’d rather go and use a VM or a dedicated machine.


The former. To be clear, though, they’d have to steal a root certificate AND intercept the traffic between your browser and any sites you visit.


> Isn't there a better way?

There's always a better way. But when you're going up against gigantic incumbents with way more power than everyone else, you don't get the better way. You get the way the powerful decide it will work. It's like politics, or economics. When a system is so large and unwieldy, significant change can't come unless the ridiculously powerful minority forces it. If they have no reason to do so...


The problem with distrusting all root CAs and accepting them back into the fold one by one is simply that you don’t know whether the root CA in question has signed off on some other, malicious sub-CA(s).


How is this worse than the status quo?


Hence the general trend (in the not-too-far-off future) would be to issue an intermediate CA to each website, or even, sooner or later, a root CA for each website.

Not sure if that defeats the purpose of a Root CA, no?


Interestingly, Debian (in the ca-certificates package) only has 122. These are from Mozilla AFAIK.


All of the Free Unixes ship roughly the Mozilla trusted CA list yes. Historically lots of them notionally had independent programmes, but they didn't actually do the maintenance/ oversight work necessary, so it's like if they'd just shipped Linux 1.3.48 forever. 1.3.48 was a pretty good release at the time, but er, not five years later. Just auto-applying Mozilla's changes is better.

However, this isn't entirely an apples-to-apples comparison. That package in Debian is for the Web PKI: CAs trusted to issue specifically for DNS names (and rarely IP addresses) for services on TLS, thus mostly the web but also IMAP, HTTP, and other protocols that can use TLS. That's it; no other purposes are warranted.

Whereas Microsoft ships roots trusted for several other purposes their users care about, including code signing for both application software and drivers, and document signing (used in some jurisdictions to give legal effect to a digital signature on a document, not to be confused with electronic signatures which are just an image of a written signature) and so on.


Fun related certificate fact: TIL that LinkedIn's share feature does not seem to support any of the Dutch government certificates - https://www.linkedin.com/post-inspector/inspect/https:%2F%2F...

(I wish they wouldn't bother with all the EV certificates and just accept LetsEncrypt. So much manual work for broken certificates that need to be replaced whenever the CA messes up the trust bits or Iran hacks the certificate providers again)


In direct contrast to this article, I've had to specifically set `security.enterprise_roots.enabled` to true on Firefox because completely legitimate sites were not trusted.


Note that this config flag specifically applies only to roots installed on your local system that are not trusted by Microsoft.

So "completely legitimate" these sites may be, but they aren't publicly trusted by anybody; it's just that somebody used Group Policy in your organisation to trust the issuer. So configuring your Firefox to match other browsers and accept this is fine, but you should go into it with eyes open.


It sounds like either your antivirus or your company is breaking your SSL.


How does this compare to macOS? I haven't gone through Apple's default CA roots.


IMO this is lame and can break your system. It also creates a single point of failure if one of these, now super-certificates, is revoked.


What super certificates? What exactly is the new single point of failure?


I don't know anything about this company (rootIQ) but it sounds like a juicy, high-value target.


How so? It looks like this is their only software product, and it doesn't look like they issue certificates.


If you can compromise the tool that people use to change their certificate stores, then you can sneak in your own root certificates.


Any application that has administrator/root privileges should be able to do that.


The rot started when browsers started including root certs as trusted by default.


As opposed to what? Using the OS's trust store? Getting the user to manually trust CAs?


When browsers default-trusted one CA, other companies, quite reasonably, wanted a piece of the action and asked to be included too. Time passed, and we, the general public, are no longer part of the conversation about who should be default-trusted.

What if the browsers (and OS) didn't default-trust anyone? The various CA companies would have to include the general public in their public relations. We'd see CAs advertising for users to trust their roots. If a CA ever screwed up, they would feel it.

Another part of the rot is that a TLS cert can chain to only one root CA. If I decided I didn't trust DigiCert, I would effectively cut myself off from Hacker News, because a site can only have one root CA. If a site could have multiple CAs, I would still be able to connect so long as I trusted one of them.


>What if the browsers (and OS) didn't default-trust anyone? The various CA companies would have to include the general public in their public relations. We'd see CAs advertising for users to trust their roots.

The Mozilla CA program, for instance, requires potential CAs to undergo audits. Is what you're proposing (requiring CAs to hire an advertising/PR department to convince users) really an improvement? Bad companies can lie in either case, but at least the former feels like less of a waste of resources. Furthermore, forcing the user to make this decision significantly increases the social-engineering risk. If the average user has to add a few dozen CAs every time they get a new computer, it will be very easy to social-engineer them into adding one more. See, for instance, social engineering users into downloading and running malware payloads by telling them they need to "update their flash player" or "install a codec".

>If a CA ever screwed up, they would feel it.

Nope, the too-big-to-fail problem will still exist. Facebook fucked up, yet we're all still using it. Consumer boycotts rarely work. At least with browser vendors there's a unified bargaining group for users. Good luck getting measures like distrusting Symantec or limiting certificate validity to 398 days via grassroots action.


> If a CA ever screwed up, they would feel it.

CAs have screwed up in the real world. And the result of those screw-ups was removal from root cert lists. And they closed down as a result. DigiNotar and WoSign both faced this calamity.

There was a time when root stores let in anyone who asked nicely. That time has long since passed, however; the modern process is quite involved. See Mozilla's process here: https://wiki.mozilla.org/CA/Application_Process, and note that it calls out that the process is expected to take two years to complete. It's not a pro forma policy: the Federal PKI [1] was not able to meet the requirements for inclusion in the root store because its rules for issuing certificates are not considered sufficient per modern browser requirements.

Quite frankly, I'd place far more trust in Mozilla's ability to audit CAs than my ability to do it myself, and I'd trust the average user's ability even less than myself. Arguing that CAs having to advertise to the user is a good idea for trust ignores the fact that unsavory companies openly abuse their users and yet still have high trust. (To say nothing of politicians...)

[1] This is the PKI system that's mandatory within the US government for internal certificates.


How would the general public make informed decisions about root CAs if the people paid to do it for living have problems doing it?


How would the general public create a collaborative online encyclopedia if the people paid to do it for a living have problems doing it?

How would the general public create an open source operating system if the people paid to do it for a living have problems doing it?


The "general public" isn't creating an online encyclopedia. If everyone using it edited it, it would most definitely suck.

The "general public" definitely isn't writing an open source operating system. That would be an even bigger disaster.

If you want to, you can contribute to both, just as you can edit your trusted certs list. But making that the default would be awful.


Better than that, if you have useful contributions to public oversight of the Web PKI you can contribute to m.d.s.policy.

https://groups.google.com/g/mozilla.dev.security.policy

For example, m.d.s.policy is currently considering an application from NAVER Business Platform, a South Korean outfit, and I'm quite sure that a native Korean IT person could usefully contribute to the discussion.


Writing one paragraph in an encyclopedia entry is a manageable task for anyone with a college education. Picking a complete and secure set of CA roots is not. To begin with, if your paragraph is wrong, it can be fixed. If your CA set is wrong, someone steals your credit card numbers.


We seem better off having the browser vendors bring the hammer down on CAs that are issuing bad certificates. Regular people don't know enough to make valid decisions about which CAs to trust, and I don't think they should have to know either.


That would be terrible. Regular users would either not be able to access anything or trust everything.


Currently they trust everything, in fact.

Why should I trust DigiCert, for example? When did I make that decision?

What does 'trust' mean if the decision is made by others?


You trust Firefox (Mozilla), and Mozilla trusts DigiCert. If you don't trust Mozilla to make good security decisions, switch browsers. If you want to second-guess this particular decision, you can adjust your Firefox configuration.


As a developer, what I find frustrating is how difficult it is to make my browser trust 127.0.0.1.

Shouldn't there be an easy way to configure it to trust that?
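For local development, one common workaround (a sketch; `mkcert` automates all of this, and OpenSSL's `-addext` flag needs 1.1.1+) is to mint a self-signed certificate whose Subject Alternative Name covers the loopback address, then import it into your OS or browser trust store yourself:

```shell
# Self-signed cert whose SAN covers both the localhost name and the loopback IP
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.crt -days 365 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
```

Browsers validate the SAN rather than the CN these days, so the `IP:127.0.0.1` entry is what makes https://127.0.0.1 pass once the cert is trusted.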


What does it mean to trust an IP address? If you found that a link took you to gmail.com on 127.0.0.1:8716, would you be fine with providing your gmail credentials to that site?


I would think I can trust anything on 127.0.0.1 because that can only be my local machine, right?

If there is something running on 127.0.0.1:8716 which I have not given permission to run then my machine is compromised already. No?


https://1.1.1.1/ is a thing. certificate used: https://crt.sh/?id=1044327786


If the site provides a trusted certificate for gmail.com, things are fine. The IP shouldn't matter; the port probably will.


AFAIK http://localhost is treated the same as https://localhost (both count as secure contexts), so you shouldn't need a self-signed certificate.


https://localhost doesn't work without a self-signed certificate...


Sure, but you can use http://localhost, and it will be treated as a secure origin


Some Oauth providers require https (even for localhost), and if I'm using WebAuthn, I have to have a certificate.


But what would WebAuthn for localhost even mean?

The credentials in WebAuthn are bound to an FQDN (typically the name of the web server, though e.g. news.ycombinator.com would be entitled to ask for WebAuthn credentials for ycombinator.com), so it's not as though this is irrelevant.

I can imagine a few dozen extra lines defining a special allowance for localhost in the WebAuthn spec, but then you're also building a bunch of special backend code to handle that too, and for what?

I built a toy WebAuthn implementation to understand it better, but I did it on my vanity site, and I don't feel like it would really have been easier without.
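To illustrate the FQDN binding mentioned above: the authenticator data returned during a WebAuthn ceremony begins with SHA-256 of the RP ID, and the relying party is required to check it. A minimal sketch in Python (function names are my own, not from any library):

```python
import hashlib


def rp_id_hash(rp_id: str) -> bytes:
    """The first 32 bytes of WebAuthn authenticator data are
    SHA-256 of the RP ID (an FQDN) the credential is bound to."""
    return hashlib.sha256(rp_id.encode("ascii")).digest()


def verify_rp_binding(auth_data: bytes, expected_rp_id: str) -> bool:
    # A relying party must reject assertions whose rpIdHash doesn't
    # match the RP ID it expects; this is the FQDN binding in action.
    return auth_data[:32] == rp_id_hash(expected_rp_id)
```

So a credential registered for ycombinator.com simply cannot be asserted against example.com, which is also why localhost credentials wouldn't carry over to a deployed hostname.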


> When browsers default-trusted one CA

And when was this, precisely? The copy of Netscape 1.0 I looked at didn't even have a UI to configure CAs.


Considering Netscape was released in December 1994 and SSL came out in February 1995, a lot of things were still in their infancy.



