
Let's Encrypt

Let's Encrypt is a free, automated, and open certificate authority brought to you by the nonprofit Internet Security Research Group (ISRG). Read all about our nonprofit work this year in our 2023 Annual Report.

Deploying Let's Encrypt's New Issuance Chains

12 April 2024 - 2:00am

On Thursday, June 6th, 2024, we will be switching issuance to use our new intermediate certificates. Simultaneously, we are removing the DST Root CA X3 cross-sign from our API, aligning with our strategy to shorten the Let’s Encrypt chain of trust. We will begin issuing ECDSA end-entity certificates from a default chain that just contains a single ECDSA intermediate, removing a second intermediate and the option to issue an ECDSA end-entity certificate from an RSA intermediate. The Let’s Encrypt staging environment will make an equivalent change on April 24th, 2024.

Most Let’s Encrypt Subscribers will not need to take any action in response to this change because ACME clients, like certbot, will automatically configure the new intermediates when certificates are renewed. The Subscribers who will be affected are those who currently pin intermediate certificates (more on that later).

The following diagram depicts what the new hierarchy looks like. You can see details of all of the certificates on our updated Chain of Trust documentation page.

New Intermediate Certificates

Earlier this year, Let’s Encrypt generated new intermediate keys and certificates. They will replace the current intermediates, which were issued in September 2020 and are approaching their expiration.

All certificates - issued by both RSA and ECDSA intermediates - will be served with a default chain of ISRG Root X1 → (RSA or ECDSA) Intermediate → End-Entity Certificate. That is, all certificates, regardless of whether you choose to have an RSA or ECDSA end-entity certificate, will have one intermediate which is directly signed by the ISRG Root X1, which is Let’s Encrypt’s most widely trusted root.

The new ECDSA intermediates will also have an alternate chain to ISRG Root X2: ISRG Root X2 → ECDSA Intermediate → End-Entity Certificate. This is only applicable to a small number of Subscribers who prefer the smallest TLS handshake possible. To use this ECDSA-only chain, see your ACME client’s documentation on how to request alternate chains. There will not be any alternative chains for the RSA intermediates.
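For example, with certbot this preference can be expressed in configuration (a sketch assuming certbot 1.12 or later; the `preferred-chain` option is certbot-specific, and other ACME clients have their own equivalents):

```ini
# Sketch of a cli.ini entry requesting the shorter ECDSA-only alternate
# chain. certbot matches this value against the Issuer CN of the topmost
# certificate in each offered chain.
preferred-chain = ISRG Root X2
```

If no offered chain matches, certbot falls back to the default chain, so this setting degrades gracefully.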

It is important to note that there will now be multiple active RSA and multiple active ECDSA intermediates at the same time. An RSA leaf certificate may be signed by any of the active RSA intermediates (a value from “R10” to “R14” in the issuer common name field of your certificate), and an ECDSA leaf certificate may be signed by any of the active ECDSA intermediates (“E5” through “E9”). Again, your ACME client should handle this automatically.
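As a small illustration of the naming scheme, here is a hypothetical Python helper (the sets simply encode the names above; nothing here is part of any Let's Encrypt API):

```python
# Active intermediate names as described above: R10-R14 (RSA), E5-E9 (ECDSA).
ACTIVE_INTERMEDIATES = {
    "RSA":   {f"R{n}" for n in range(10, 15)},
    "ECDSA": {f"E{n}" for n in range(5, 10)},
}

def intermediate_key_type(issuer_cn: str):
    """Return the key type for a new-generation issuer CN, or None."""
    for key_type, names in ACTIVE_INTERMEDIATES.items():
        if issuer_cn in names:
            return key_type
    return None
```

An issuer CN of "R12" maps to RSA, "E7" to ECDSA, and a retired name like "R3" to None.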

A Certificate Authority’s intermediate certificates expire every few years and need to be replaced, just like a website’s certificate is routinely renewed. Going forward, Let’s Encrypt intends to switch what intermediates are in use annually, which will help enhance the overall security of the certificates.

Removing DST Root CA X3 Cross-sign

The new intermediate chains will not include the DST Root CA X3 cross-sign, as previously announced in our post about Shortening the Let’s Encrypt Chain of Trust. By eliminating the cross-sign, we’re making our certificates leaner and more efficient, leading to faster page loads for Internet users. We already stopped providing the cross-sign in the default certificate chain on February 8th, 2024, so if your ACME client is not explicitly requesting the chain with DST Root CA X3, this will not be a change for you.

ECDSA Intermediates as Default for ECDSA Certificates

Currently, ECDSA end-entity certificates are signed by our RSA intermediates unless users opted in via a request form to use our ECDSA intermediates. With our new intermediates, we will begin issuing all ECDSA end-entity certificates from the ECDSA intermediates. The request form and allow-list, which we had introduced to make ECDSA intermediates available, will no longer be used.

Earlier, the default ECDSA chain included two intermediates: both E1 and the cross-signed ISRG Root X2 (i.e. ISRG Root X1 → ISRG Root X2 → E1 → End-Entity Certificate). After the change, it will contain only a single intermediate: the version of one of our new ECDSA intermediates cross-signed by ISRG Root X1 (i.e. ISRG Root X1 → E5 → End-Entity Certificate). This ensures that all of our intermediates, both RSA and ECDSA, are signed directly by our most widely-trusted ISRG Root X1.

We expect this change to benefit most users by providing smaller TLS handshakes. If compatibility problems with ECDSA intermediates arise, we recommend Let’s Encrypt users switch to RSA certificates. Android 7.0 is known to have a bug preventing it from working with most Elliptic Curve (EC) certificates, including our ECDSA intermediates; however, that version of Android doesn’t trust our ISRG Root X1 and thus is already incompatible.

Risks of Pinning or Hard-Coding Intermediates

We do not recommend pinning or otherwise hard-coding intermediates or roots. Pinning intermediates is especially inadvisable, as they change often. If you do pin intermediates, make sure you have the complete set of new intermediates (available here).

Questions?

We’re grateful for the millions of subscribers who have trusted us to carry out best practices to make the web more secure and privacy-respecting, and rotating intermediates more frequently is one of them. We’d also like to thank our great community and the funders whose support makes this work possible. If you have any questions about this transition or any of the other work we do, please ask on our community forum.

We depend on contributions from our supporters in order to provide our services. If your company or organization can help our work by becoming a sponsor of Let’s Encrypt please email us at sponsor@letsencrypt.org. We ask that you make an individual contribution if it is within your means.

New Intermediate Certificates

19 March 2024 - 1:00am

On Wednesday, March 13, 2024, Let’s Encrypt generated 10 new Intermediate CA Key Pairs, and issued 15 new Intermediate CA Certificates containing the new public keys. These new intermediate certificates provide smaller and more efficient certificate chains to Let’s Encrypt Subscribers, enhancing the overall online experience in terms of speed, security, and accessibility.

First, a bit of history. In September 2020, Let’s Encrypt issued a new root and collection of intermediate certificates. Those certificates helped us improve the privacy and efficiency of Web security by making ECDSA end-entity certificates widely available. However, those intermediates are approaching their expiration dates, so it is time to replace them.

Our new batch of intermediates is very similar to the ones we issued in 2020, with a few small changes. We’re going to go over what those changes are and why we made them.

The New Certificates

We created 5 new 2048-bit RSA intermediate certificates named in sequence from R10 through R14. These are issued by ISRG Root X1. You can think of them as direct replacements for our existing R3 and R4 intermediates.

We also created 5 new P-384 ECDSA intermediate certificates named in sequence from E5 through E9. Each of these is represented by two certificates: one issued by ISRG Root X2 (exactly like our existing E1 and E2), and one issued (or cross-signed) by ISRG Root X1.

You can see details of all of the certificates on our updated hierarchy page.

Rotating Issuance

Rotating the set of intermediates we issue from helps keep the Internet agile and more secure. It encourages automation and efficiency, and discourages outdated practices like key pinning. “Key Pinning” is a practice in which clients — either ACME clients getting certificates for their site, or apps connecting to their own backend servers — decide to trust only a single issuing intermediate certificate rather than delegating trust to the system trust store. Updating pinned keys is a manual process, which leads to an increased risk of errors and potential business continuity failures.

Intermediates usually change only every five years, so this joint is exercised infrequently and client software keeps making the same mistakes. Shortening the lifetime from five years to three years means we will be conducting another ceremony in just two years, ahead of the expiration date on these recently created certificates. This ensures we exercise the joint more frequently than in the past.

We also issued more intermediates this time around. Historically, we’ve had two of each key type (RSA and ECDSA): one for active issuance, and one held as a backup for emergencies. Moving forward we will have five: two conducting active issuance, two waiting in the wings to be introduced in about one year, and one for emergency backup. Randomizing the selected issuer for a given key type means it will be impossible to predict which intermediate a certificate will be issued from. We are very hopeful that these steps will prevent intermediate key pinning altogether, and help the WebPKI remain agile moving forward.

These shorter intermediate lifetimes and randomized intermediate issuance shouldn’t impact the online experience of the general Internet user. Subscribers may be impacted if they are pinning one of our intermediates, though this should be incredibly rare.

Providing Smaller Chains

When we issued ISRG Root X2 in 2020, we decided to cross-sign it from ISRG Root X1 so that it would be trusted even by systems that didn’t yet have ISRG Root X2 in their trust store. This meant that Subscribers who wanted issuance from our ECDSA intermediates would have a choice: they could either have a very short, ECDSA-only, but low-compatibility chain terminating at ISRG Root X2, or they could have a longer, high-compatibility chain terminating at ISRG Root X1. At the time, this tradeoff (TLS handshake size vs compatibility) seemed like a reasonable choice to provide, and we provided the high-compatibility chain by default to support the largest number of configurations.

ISRG Root X2 is now trusted by most platforms, and we can now offer an improved version of the same choice. The same very short, ECDSA-only chain will still be available for Subscribers who want to optimize their TLS handshakes at the cost of some compatibility. But the high-compatibility chain will improve drastically: instead of containing two intermediates (both E1 and the cross-signed ISRG Root X2), it will now contain only a single intermediate: the version of one of our new ECDSA intermediates cross-signed by ISRG Root X1.

This reduces the size of our default ECDSA chain by about a third, and is an important step towards removing our ECDSA allow-list.

Other Minor Changes

We’ve made two other tiny changes that are worth mentioning, but will have no impact on how Subscribers and clients use our certificates:

  • We’ve changed how the Subject Key ID field is calculated, from a SHA-1 hash of the public key, to a truncated SHA-256 hash of the same data. Although this use of SHA-1 was not cryptographically relevant, it is still nice to remove one more usage of that broken algorithm, helping move towards a world where cryptography libraries don’t need to include SHA-1 support at all.

  • We have removed our CPS OID from the Certificate Policies extension. This saves a few bytes in the certificate, which can add up to a lot of bandwidth saved over the course of billions of TLS handshakes.

Both of these mirror identical changes that we made to our Subscriber Certificates in the past year.
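The Subject Key ID change can be sketched in a few lines; the `spki` bytes below are placeholder data standing in for a real DER-encoded public key:

```python
import hashlib

# Placeholder standing in for the DER-encoded subjectPublicKey bits that
# the Subject Key ID is derived from (not a real key).
spki = b"\x30\x82\x01\x0a" + b"\x00" * 266

old_skid = hashlib.sha1(spki).digest()          # previous method: full SHA-1
new_skid = hashlib.sha256(spki).digest()[:20]   # new method: truncated SHA-256

# Both digests are 20 bytes, so the certificate encoding stays the same size;
# only the dependency on SHA-1 goes away.
assert len(old_skid) == len(new_skid) == 20
```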

Deployment

We intend to put two of each of the new RSA and ECDSA keys into rotation in the next few months. Two of each will be ready to swap in at a future date, and one of each will be held in reserve in case of an emergency. Read more about the strategy in our December 2023 post on the Community Forum.

Not familiar with the forum? It’s where Let’s Encrypt publishes updates on our Issuance Tech and APIs. It’s also where you can go for troubleshooting help from community experts and Let’s Encrypt staff. Check it out and subscribe to alerts for technical updates.

We hope that this has been an interesting and informative tour around our new intermediates, and we look forward to continuing to improve the Internet, one certificate at a time.

We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. We ask that you make an individual contribution if it is within your means.

Introducing Sunlight, a CT implementation built for scalability, ease of operation, and reduced cost

14 March 2024 - 1:00am

Let’s Encrypt is proud to introduce Sunlight, a new implementation of a Certificate Transparency log that we built from the ground up with modern Web PKI opportunities and constraints in mind. In partnership with Filippo Valsorda, who led the design and implementation, we incorporated feedback from the broader transparency logging community, including the Chrome and TrustFabric teams at Google, the Sigsum project, and other CT log and monitor operators. Their insights have been instrumental in shaping the project’s direction.

CT plays an important role in the Web PKI, enhancing the ability to monitor and research certificate issuance. The operation of a CT log, however, faces growing challenges with the increasing volume of certificates. For instance, Let’s Encrypt issues over four million certificates daily, each of which must be logged in two separate CT logs. Our well-established “Oak” log currently holds over 700 million entries, reflecting the significant scale of these challenges.

In this post, we’ll explore the motivation behind Sunlight and how its design aims to improve the robustness and diversity of the CT ecosystem, while also improving the reliability and performance of Let’s Encrypt’s logs.

Bottlenecks from the Database

Let’s Encrypt has been running public CT logs since 2019, and we’ve gotten a lot of operational experience with running them, but it hasn’t been trouble-free. The biggest challenge in the architecture we’ve deployed for our “Oak” log is that the data is stored in a relational database. We’ve scaled that up by splitting each year’s worth of data into a “shard” with its own database, and then later shrinking the shards to cover six months instead of a full year.

The approach of splitting into more and more databases is not something we want to continue doing forever, as the operational burden and costs increase. The current storage size of a CT log shard is between 5 and 10 terabytes. That’s big enough to be concerning for a single database: We previously had a test log fail when we ran into a 16TiB limit in MySQL.

Scaling read capacity up requires large database instances with fast disks and lots of RAM, which are not cheap. We’ve had numerous instances of CT logs becoming overloaded by clients attempting to read all the data in the log, straining the database in the process. When rate limits are imposed to prevent overloading, clients are forced to slowly crawl the API, diminishing CT’s efficiency as a fast mechanism for detecting mis-issued certificates.

Serving Tiles

Initially, Let’s Encrypt only planned on building a new CT log implementation. However, our discussions with Filippo made us realize that other transparency systems had improved on the original Certificate Transparency design, and we could make our logs even more robust and scalable by changing the read path APIs. In particular, the Go Checksum Database is inspired by Certificate Transparency, but uses a more efficient format for publishing its data as a series of easily stored and cached tiles.

Certificate Transparency logs are a binary tree, with every node containing a hash of its two children. The “leaf” level contains the actual entries of the log: the certificates, appended to the right side of the tree. The top of the tree is digitally signed. This forms a cryptographically verifiable structure called a Merkle Tree, which can be used to check if a certificate is in the tree, and that the tree is append-only.
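As a sketch of that structure, here is the leaf-and-node hashing that CT logs use (the 0x00/0x01 domain-separation prefixes come from RFC 6962; the four entries are toy placeholders, not real certificates):

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962: leaf hashes are domain-separated with a 0x00 prefix.
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # RFC 6962: interior node hashes are domain-separated with a 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

# A tiny four-entry tree; real logs hold hundreds of millions of entries.
leaves = [leaf_hash(e) for e in [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]]
level1 = [node_hash(leaves[0], leaves[1]), node_hash(leaves[2], leaves[3])]
root = node_hash(level1[0], level1[1])  # this tree head is what gets signed
```

Appending a new certificate only changes hashes on the path from that leaf to the root, which is what makes inclusion and append-only (consistency) proofs logarithmic in the size of the log.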

Sunlight tiles are files containing 256 elements each, either hashes at a certain tree “height” or certificates (or pre-certificates) at the leaf level. Russ Cox has a great explanation of how tiles work on his blog, or you can read the relevant section of the Sunlight specification. Even Trillian, the current implementation of CT we run, uses a subtree system similar to these tiles as its internal storage.
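To get a feel for the layout, the level-0 (leaf) tile holding a given entry follows from simple arithmetic. This sketch covers only leaf tiles; the actual tile paths and higher-level tile addressing are defined by the Sunlight specification:

```python
TILE_WIDTH = 256  # elements per tile

def leaf_tile_coordinates(entry_index: int):
    """Which level-0 (leaf) tile an entry lives in, and its offset within it."""
    return entry_index // TILE_WIDTH, entry_index % TILE_WIDTH

# Entry 700,000,000 (roughly the size of the Oak log) lands in:
tile, offset = leaf_tile_coordinates(700_000_000)  # → (2734375, 0)
```

Because these coordinates are fixed once an entry is integrated, the corresponding tile file never changes and can be cached forever.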

Unlike the dynamic endpoints in previous CT APIs, serving a tree as tiles doesn’t require any dynamic computation or request processing, so we can eliminate the need for API servers. Because the tiles are static, they’re efficiently cached, in contrast with CT APIs like get-proof-by-hash which have a different response for every certificate, so there’s no shared cache. The leaf tiles can also be stored compressed, saving even more storage!

The idea of exposing the log as a series of static tiles is motivated by our desire to scale out the read path horizontally and relatively inexpensively. We can directly expose tiles in cloud object storage like S3, use a caching CDN, or use a webserver and a filesystem.

Object or file storage is readily available, can scale up easily, and costs significantly less than databases from cloud providers. It seemed like the obvious path forward. In fact, we already have an S3-backed cache in front of our existing CT logs, which means we are currently storing our data twice.

Running More Logs

The tiles API improves the read path, but we also wanted to simplify our architecture on the write path. With Trillian, we run a collection of nodes along with etcd for leader election to choose which will handle writing. This is somewhat complex, and we believe the CT ecosystem allows a different tradeoff.

The key realization is that Certificate Transparency is already a distributed system, with clients submitting certificates to multiple logs, and gracefully failing over from any unavailable ones to the others. Each individual log’s write path doesn’t require a highly available leader election system. A simple single-node writer can meet the 99% Service Level Objective of a CT log.

The single-node Sunlight architecture lets us run multiple independent logs with the same amount of computing power. This increases the system’s overall robustness, even if each individual log has lower potential uptime. No more leader election needed. We use a simple compare-and-swap mechanism to store checkpoints and prevent accidentally running two instances at once, which could result in a forked tree, but that has much less overhead than leader election.

No More Merge Delay

One of the goals of CT was to have limited latency for submission to the logs. A design feature called Merge Delay was added to support that. When submitting a certificate to a log, the log can return a Signed Certificate Timestamp (SCT) immediately, with a promise to include it in the log within the log’s Maximum Merge Delay, conventionally 24 hours. While this seems like a good tradeoff to not slow down issuance, there have been multiple incidents and near-misses where a log stops operating with unmerged certificates, missing its maximum merge delay, and breaking that promise.

Sunlight takes a different approach, holding submissions while it batches and integrates certificates in the log, eliminating the merge delay. While this leads to a small latency increase, we think it’s worthwhile to avoid one of the more common CT log failure cases.

It also lets us embed the final leaf index in an extension of our SCTs, bringing CT a step closer to direct client verification of Merkle tree proofs. The extension also makes it possible for clients to fetch the proof of log inclusion from the new static tile-based APIs, without requiring server-side lookup tables or databases.

A Sunny Future

Today’s announcement of Sunlight is just the beginning. We’ve released software and a specification for Sunlight, and have Sunlight CT logs running. Head to sunlight.dev to find resources to get started. We encourage CAs to start test submitting to Let’s Encrypt’s new Sunlight CT logs, for CT Monitors and Auditors to add support for consuming Sunlight logs, and for the CT programs to consider trusting logs running on this new architecture. We hope Sunlight logs will be made usable for SCTs by the CT programs run by the browsers in the future, allowing CAs to rely on them to meet the browser CT logging requirements.

We’ve gotten positive feedback so far, with comments such as “Google’s TrustFabric team, maintainers of Trillian, are supportive of this direction and the Sunlight spec. We have been working towards the same goal of cacheable tile-based logs for other ecosystems with serverless tooling, and will be folding this into Trillian and ctfe, along with adding support for the Sunlight API.”

If you have feedback on the design, please join in the conversation on the ct-policy mailing list, or in the #sunlight channel on the transparency-dev Slack (invitation to join).

We’d like to thank Chrome for supporting the development of Sunlight, and Amazon Web Services for their ongoing support for our CT log operation. If your organization monitors or values CT, please consider a financial gift of support. Learn more at https://www.abetterinternet.org/sponsor/ or contact us at: sponsor@abetterinternet.org.

A Year-End Letter from our Vice President

28 December 2023 - 1:00am
Sarah Gran

This letter was originally published in our 2023 Annual Report.

We typically open our annual report with a letter from our Executive Director and co-founder, Josh Aas, but he’s on parental leave so I’ll be filling in. I’ve run the Brand & Donor Development team at ISRG since 2016, so I’ve had the pleasure of watching our work mature, our impact grow, and I’ve had the opportunity to get to know many great people who care deeply about security and privacy on the Internet.

One of the biggest observations I’ve made during Josh’s absence is that all 23 people who work at ISRG fall into that class of folks. Of course I was a bit nervous as Josh embarked on his leave to discover just how many balls he has been keeping in the air for the last decade. Answer: it’s a lot. But the roster of staff that we’ve built up made it pretty seamless for us to keep moving forward.

Let’s Encrypt is supporting 40 million more websites than a year ago, bringing the total to over 360 million. The engineering team has grown to 12 people who are responsible for our continued reliability and ability to scale. But they’re not maintaining the status quo. Let’s Encrypt engineers are pushing forward our expectations for ourselves and for the WebPKI community. We’ve added shorter-lived certificates to our 2024 roadmap. We’re committing to this work because sub-10 day certificates significantly reduce the impact of key compromise and broaden the universe of people who can use our certs. In addition, the team started an ambitious project to develop a new Certificate Transparency implementation because the only existing option cannot scale for the future and is prone to operational fragility. These projects are led by two excellent technical leads, Aaron Gable and James Renken, who balance our ambition with our desire for a good quality of life for our teams.

Prossimo continues to deliver highly performant and memory safe software and components in a world that is increasingly eager to address the memory safety problem. This was evidenced by participation at Tectonics, a gathering we hosted which drew industry leaders for invigorated conversation. Meanwhile, initiatives like our memory safe AV1 decoder are in line to replace a C version in Google Chrome. This change would improve security for billions of people. We’re grateful to the community that helps to guide and implement our efforts in this area, including Dirkjan Ochtman, the firms Tweede golf and Ferrous Systems, and the maintainers of the many projects we are involved with.

Our newest project, Divvi Up, brought on our first two subscribers in 2023. Horizontal, a small international nonprofit serving Human Rights Defenders, will be collecting privacy-preserving telemetry metrics about the users of their Tella app, which people use to document human rights violations. Mozilla is using Divvi Up to gain insight into aspects of user behavior in the Firefox browser. It took a combination of focus and determination to get us to a production-ready state, and our technical lead, Brandon Pitman, played a big role in getting us there.

We hired Kristin Berdan to fill a new role as General Counsel and her impact is already apparent within our organization. She joins Sarah Heil, our CFO, Josh, and me in ISRG leadership.

Collectively, we operate three impactful and growing projects for $7 million a year. This is possible because of the amazing leadership assembled across our teams and the ongoing commitment from our community to validate the usefulness of our work. As we look toward 2024 and the challenges and opportunities that face us, I ask that you join us in building a more secure and privacy-respecting Internet by sponsoring us, making a donation or gift through your DAF, or sharing with the folks you know why security and privacy matter to them.

Support Our Work

ISRG is a 501(c)(3) nonprofit organization that is 100% supported through the generosity of those who share our vision for ubiquitous, open Internet security. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.

Our role in supporting the nonprofit ecosystem

13 December 2023 - 1:00am

For more than ten years, we at the nonprofit Internet Security Research Group (ISRG) have been focused on our mission of building a more secure and privacy-respecting Internet for everyone, everywhere. As we touch on in our 2023 Annual Report, we now serve more than 360 million domains with free TLS certificates.

Beyond being a big number, what does that signify? What’s the importance of TLS being widely adopted, anyway? We’ll take a closer look at these questions through the lens of one group of Subscribers we can relate to particularly well: nonprofits.

Serving .org at Internet scale

Let’s Encrypt serves 51% of all websites using the .org top level domain (TLD), which is commonly used by nonprofits. In the US alone there are 1.8M registered nonprofit organizations. And while the focus of each of these organizations varies, all of them rely on the Internet in some capacity.

When a nonprofit uses a TLS certificate on their website, it protects their visitors and stakeholders from snoopers, MITM attacks, and surveillance. Without TLS, nonprofits' content could be changed without their knowledge or their visitors' private information could be compromised. Access to free and automated TLS via Let’s Encrypt means these nonprofits face as few barriers as possible to adopting TLS.

In short, something as fundamental as security and privacy should be as easy to access as possible. For nonprofits both large and small, Let’s Encrypt makes it easy to provide security and privacy for users of their websites, enabling them to remain focused on their missions.

Zooming in on four nonprofits we serve

The American Civil Liberties Union (ACLU) uses Let’s Encrypt as it works to realize its focus of being a “guardian of liberty” for US citizens. Using Let’s Encrypt protects ACLU’s constituents when they’re trying to know their rights or take action. With more than 4 million page views per month, ACLU’s website is a critical part of their mission.

Human Rights Watch (HRW) is an international nonprofit organization. With more than 500 individuals on staff around the world, HRW’s website is a trove of information empowering individuals and organizations alike to be informed and take action with a global perspective. Nearly 70% of HRW’s web traffic comes from people outside of the United States; that’s millions of page views per month secured by Let’s Encrypt—and by extension, millions of people around the world benefitting from a more secure and privacy-respecting Web.

The Center for Democracy & Technology (CDT) uses Let’s Encrypt to advance its mission to promote democratic values by shaping technology policy and architecture, with a focus on the rights of the individual. The CDT website offers updated and insightful information into the ways policy and innovation impact the digital space. Without a TLS certificate, the content of these pages could be intercepted and changed. What’s more, for those looking to financially support CDT, using TLS on their donation page encrypts the transaction protecting user details such as credit card and other personal information. Mallory Knodel, CTO at CDT and longtime digital rights defender and advocate commented, “Billions of people in over 60 countries access the internet with less censorship and surveillance because Let’s Encrypt hastened the adoption of web security measures by making certificates easy to obtain.”

Serving philanthropic foundations

In the United States, the work of nonprofits is made possible in large part through philanthropic foundations and organizations. When it comes to philanthropy’s web presence, Let’s Encrypt is there, too.

We provide TLS to billion dollar philanthropic organizations like the Hewlett Foundation, the Silicon Valley Community Foundation, and many others. Taking a look at the top 50 philanthropic organizations around the world, Let’s Encrypt serves 36% of them. For large philanthropies, their website is the primary tool they have to communicate their focus areas for future funding as well as the impact they’ve made with past giving.

One of the leading philanthropists in the US, Craig Newmark, uses Let’s Encrypt and Digital Ocean for his website, craig newmark philanthropies. Commenting on our work, Craig recently shared, “The people at ISRG have been helping protect the Internet for over ten years, and continue to protect us all. They’re a necessary part of Cyber Civil Defense and national security.”

Overall, while Let’s Encrypt aims to build a better Internet, we’re particularly proud that our impact protects those seeking to build a better world.

Internet Security Research Group (ISRG) is the parent organization of Prossimo, Let’s Encrypt, and Divvi Up. ISRG is a 501(c)(3) nonprofit. If you’d like to support our work, please consider getting involved, donating, or encouraging your company to become a sponsor.

Increase your security governance with CAA

7 September 2023 - 2:00am

According to Cloudflare’s Merkle Town, 257,036 certificates are issued every hour. We at Let’s Encrypt are issuing close to 70% of those certs. Being a Certificate Authority that operates as a nonprofit for the public’s benefit means we are constantly considering how we can improve our Subscribers' experience and security. One simple way to do just that is by using CAA (Certificate Authority Authorization), along with its two extensions for Account and Method Binding.

What is CAA?

CAA is a type of DNS record that allows site owners to specify which Certificate Authorities are allowed to issue certificates containing their domain names. Using CAA is a proactive way to ensure that your domain(s) and subdomain(s) are under your control—you’re able to add a layer of security to your DNS governance. (By contrast, Certificate Transparency (CT) logs are a reactive way to monitor your DNS governance—by publicly publishing certificates issued to domains, Subscribers can verify that their domain(s) are using the intended CA(s).)

How to create a CAA Record

We think CAA is important for every Subscriber, but it’s all the more important if you’re handling TLS at scale. This is particularly true if a team or multiple teams have access to your integration.

Account and Method Binding is another layer of CAA that can improve your security even further. Method binding allows Subscribers to limit the set of domain control validation methods (DNS-01, HTTP-01, or TLS-ALPN-01) that can be used to demonstrate control over their domain. Account binding allows a Subscriber to limit issuance to a specific ACME account. For further technical details, review our community post or take a look at RFC 8657.
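As a concrete illustration, the DNS records might look like the following (example.com and the account ID 12345 are placeholders; check your CA's documentation for the exact parameters they honor):

```
; Option 1: allow only Let's Encrypt to issue for example.com
example.com.  IN  CAA  0 issue "letsencrypt.org"

; Option 2: additionally pin issuance to one ACME account and the DNS-01
; validation method (RFC 8657)
example.com.  IN  CAA  0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345; validationmethods=dns-01"
```

Note that the two options are alternatives: publishing an unrestricted issue record alongside a bound one would allow issuance to match either, defeating the binding. Your ACME account URI can typically be found in your client's configuration or logs.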

CAA Adoption

Famedly, a German healthcare company, set up CAA as part of updating their overall ACME setup. When we asked why they chose to turn on CAA now, the answer was simple: it was easy to do and low-hanging fruit to enhance their security.

“The biggest benefit of using CAA along with account and method binding is closing the DV loophole,” said Jan Christian Grünhage, Famedly’s Head of Infrastructure. “By using DNSSEC with DNS-01 challenges, we’ve got cryptographic signatures all the way through the stack.”

The team at Famedly set up CAA over the course of a few days. “The larger project was to transition our issuance to a single ACME account ID, so adopting CAA as part of that work only added marginal effort,” remarked Jan. “The added security benefit was absolutely worth the effort.”

Getting started with CAA

If you manage TLS at scale, consider adopting CAA and Account and Method Binding. To get started, review our documentation, RFC 8659 and RFC 8657, and check out the community forum for more from Subscribers who’ve set up or are using CAA.

As with anything involving DNS, there are some potential hiccups to avoid. The most important to highlight is that issuance will always respect the CAA record set closest to the domain name in the certificate being issued. For more on this, check out this section of the CAA documentation. You’ll also want to ensure that your DNS provider supports setting CAA records.

Thanks to Famedly

We’re grateful for Famedly taking the time to share with us more about their experience in setting up CAA. What’s more, Famedly financially supported ISRG this year as part of our tenth anniversary campaign.

As a project of the Internet Security Research Group (ISRG), 100% of the funding for Let’s Encrypt comes from contributions from our community of users and supporters. We depend on their support in order to provide our public benefit services. If your company or organization would like to sponsor Let’s Encrypt, please email us at sponsor@letsencrypt.org. If you or your organization can support us with a donation of any size, we ask that you consider a contribution.

Shortening the Let's Encrypt Chain of Trust

10 juli 2023 - 2:00am

When Let’s Encrypt first launched, we needed to ensure that our certificates were widely trusted. To that end, we arranged to have our intermediate certificates cross-signed by IdenTrust’s DST Root CA X3. This meant that all certificates issued by those intermediates would be trusted, even while our own ISRG Root X1 wasn’t yet. During subsequent years, our Root X1 became widely trusted on its own. 

Come late 2021, our cross-signed intermediates and DST Root CA X3 itself were expiring. And while all up-to-date browsers at that time trusted our root, over a third of Android devices were still running old versions of the OS which would suddenly stop trusting websites using our certificates. That breakage would have been too widespread, so we arranged for a new cross-sign – this time directly onto our root rather than our intermediates – which would outlive DST Root CA X3 itself. This stopgap allowed those old Android devices to continue trusting our certificates for three more years.

On September 30th, 2024, that cross-sign too will expire.

In the last three years, the percentage of Android devices which trust our ISRG Root X1 has risen from 66% to 93.9%. That percentage will increase further over the next year, especially as Android releases version 14, which has the ability to update its trust store without a full OS update. In addition, dropping the cross-sign will reduce the number of certificate bytes sent in a TLS handshake by over 40%. Finally, it will significantly reduce our operating costs, allowing us to focus our funding on continuing to improve your privacy and security.

For these reasons, we will not be getting a new cross-sign to extend compatibility any further.

The transition will roll out as follows:

  • On Thursday, Feb 8th, 2024, we will stop providing the cross-sign by default in requests made to our /acme/certificate API endpoint. For most Subscribers, this means that your ACME client will configure a chain which terminates at ISRG Root X1, and your webserver will begin providing this shorter chain in all TLS handshakes. The longer chain, terminating at the soon-to-expire cross-sign, will still be available as an alternate chain which you can configure your client to request.

  • On Thursday, June 6th, 2024, we will stop providing the longer cross-signed chain entirely. This is just over 90 days (the lifetime of one certificate) before the cross-sign expires, ensuring that subscribers have at least one full issuance cycle to migrate off the cross-signed chain.

  • On Monday, September 30th, 2024, the cross-signed certificate will expire. This should be a non-event for most people, as any client breakages should have occurred over the preceding six months.
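For ACME client authors: alternate chains like the one mentioned above are advertised via Link headers with rel="alternate" on the certificate download response. A minimal sketch of extracting those URLs (the helper name and header value below are illustrative, not part of any client's API):

```python
import re

def alternate_chain_urls(link_header):
    """Extract rel="alternate" URLs from the Link header returned
    alongside an ACME certificate download (illustrative sketch;
    production clients should use a proper Link-header parser)."""
    urls = []
    for part in link_header.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="alternate"', part)
        if m:
            urls.append(m.group(1))
    return urls
```

In practice, clients usually let the operator select a chain by issuer name (for example, certbot's preferred-chain option) and then fetch the matching alternate URL.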

Infographic of the distribution of installed Android versions, showing that 93.9% of the population is running Android 7.1 or above.

If you use Android 7.0 or earlier, you may need to take action to ensure you can still access websites secured by Let’s Encrypt certificates. We recommend installing and using Firefox Mobile, which uses its own trust store instead of the Android OS trust store, and therefore trusts ISRG Root X1.

If you are a site operator, you should keep an eye on your website usage statistics and active user-agent strings during Q2 and Q3 of 2024. If you see a sudden drop in visits from Android, it is likely because you have a significant population of users on Android 7.0 or earlier. We encourage you to provide the same advice to them as we provided above.

If you are an ACME client author, please make sure that your client correctly downloads and installs the certificate chain provided by our API during every certificate issuance, including renewals. Failure modes we have seen in the past include a) never downloading the chain at all and only serving the end-entity certificate; b) never downloading the chain and instead serving a hard-coded chain; and c) only downloading the chain at first issuance and not re-downloading during renewals. Please ensure that your client does not fall into any of these buckets.

We appreciate your understanding and support, both now and in the years to come as we provide safe and secure communication to everyone who uses the web. If you have any questions about this transition or any of the other work we do, please ask on our community forum.

We’d like to thank IdenTrust for their years of partnership. They played an important role in helping Let’s Encrypt get to where we are today, and their willingness to arrange a stopgap cross-sign in 2021 demonstrated a true commitment to creating a secure Web.

We depend on contributions from our supporters in order to provide our services. If your company or organization can help our work by becoming a sponsor of Let’s Encrypt please email us at sponsor@letsencrypt.org. We ask that you make an individual contribution if it is within your means.

ISRG’s 10th Anniversary

24 mei 2023 - 2:00am
Celebrating 10 Years of ISRG

It’s hard to believe 10 years have passed since Eric Rescorla, Alex Halderman, Peter Eckersley and I founded ISRG as a nonprofit home for public benefit digital infrastructure. We had an ambitious vision, but we couldn’t have known then the extent to which that vision would become shared and leveraged by so much of the Internet.

Since its founding in 2013, ISRG’s Let’s Encrypt certificate authority has come to serve hundreds of millions of websites and protect just about everyone who uses the Web. Our Prossimo project has brought the urgent issue of memory safety to the fore, and Divvi Up is set to revolutionize the way apps collect metrics while preserving user privacy. I’ve tried to comprehend how much data about peoples' lives our work has and will protect, and tried even harder to comprehend what that means if one could quantify privacy. It’s simply beyond my ability.

Some of the highlights from the past ten years include:

All this wouldn’t be possible without our staff, community, donors, funders, and other partners, all of whom I’d like to thank wholeheartedly.

I feel so fortunate that we’ve been able to thrive. We’re fortunate primarily because great people got involved and funders stepped up, but there’s also just a bit of good fortune involved in any success story. The world is a complicated place; there is complex context that one can’t control around every effort. Despite our best efforts, fortune has a role to play in terms of the degree to which the context swirling around us helps or hinders. We have been fortunate in every sense of the word, and for that I am grateful.

Our work is far from over. Each of our three projects has challenges and opportunities ahead.

For Let’s Encrypt, which is more critical than ever and relatively mature, our focus over the next few years will be on long-term sustainability. More and more people working with certificates can’t recall a time when Let’s Encrypt didn’t exist, and most people who benefit from our service don’t need to know it exists at all (by design!). Let’s Encrypt is just part of how the Internet works now, which is great for many reasons, but it also means it’s at risk of being taken for granted. We are making sure that doesn’t happen so we can keep Let’s Encrypt running reliably and make investments in its future.

Prossimo is making a huge amount of progress moving critical software infrastructure to memory safe code, from the Linux kernel to NTP, TLS, media codecs, and even sudo/su. We have two major challenges ahead of us here. The first is to raise the money we need to complete development work. The second is to get the safer software we’ve been building adopted widely. We feel pretty good about our plans but it’s not going to be easy. Things worth doing rarely are.

Divvi Up is exciting technology with a bright future. Our biggest challenge here, like most things involving cryptography, is to make it easy to use. We also need to make sure we can provide the service at a cost that will allow for widespread adoption, so we’ll be doing a lot of optimization. Our hope is that over the next decade we can make privacy respecting metrics the norm, just like we did for HTTPS.

The internet wasn’t built with security or privacy in mind, so there is a bountiful opportunity for us to improve its infrastructure. The Internet is also constantly growing and changing, so it is also our job to look into the future and prepare for the next set of threats and challenges as best we can.

Thanks to our supporters, we’ll continue adapting and responding to help ensure the Web is more secure long into the future. Please consider becoming a sponsor or making a donation in support of our work.

Improving Resiliency and Reliability for Let’s Encrypt with ARI

23 maart 2023 - 1:00am

The Let’s Encrypt team is excited to announce that ACME Renewal Information (ARI) is live in production! ARI makes it possible for our subscribers to handle certificate revocation and renewal as easily and automatically as the process of getting a certificate in the first place.

With ARI, Let’s Encrypt can signal to ACME clients when they should renew certificates. In the normal case of a certificate with a 90-day lifetime, ARI might signal for renewal at 60 days. If Let’s Encrypt needs to revoke a certificate for some reason, ARI can signal that renewal needs to happen prior to the revocation. This means that even in extenuating circumstances, renewal can happen in an entirely automated way without disrupting subscriber services.

Without ARI, an unexpected revocation event might mean that Let’s Encrypt would have to send emails to affected subscribers. Maybe those emails are read in time to avoid a service disruption; maybe they aren’t, and engineers have to take manual action to trigger early renewals, possibly in the middle of the night. We can’t wait for ARI to make this scenario a thing of the past.

ARI has a couple of additional benefits for Let’s Encrypt and our subscribers. First, we can use ARI to help modulate renewals as needed to avoid load spikes on the Let’s Encrypt infrastructure (of course subscribers can still renew whenever they want or need, as ARI is merely a signal or suggestion). Second, ARI can be used to set subscribers up for success in terms of ideal renewal times in the event that Let’s Encrypt offers even shorter-lived certificates in the future.
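To illustrate how a client might consume an ARI signal: the server suggests a renewal window, and the client picks a random instant inside it, which naturally spreads renewal load. A minimal sketch under assumed names (the function and its parameters are illustrative, not part of any ACME client's API):

```python
import datetime
import random

def choose_renewal_time(window_start, window_end, now=None):
    """Pick a random instant inside an ARI-style suggested renewal window.

    Randomizing within the window spreads renewal load across subscribers;
    if the window has already closed, renew immediately. Illustrative only.
    """
    if now is None:
        now = datetime.datetime.now(datetime.timezone.utc)
    if window_end <= now:
        return now  # the suggested window has passed: renew right away
    start = max(window_start, now)  # never schedule a renewal in the past
    span = (window_end - start).total_seconds()
    return start + datetime.timedelta(seconds=random.uniform(0, span))
```

A real client would re-poll the renewal-information endpoint periodically, since the suggested window can change, for example when a revocation is imminent.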

ARI has been standardized in the IETF, a process that started with an email from Let’s Encrypt engineer Roland Shoemaker in March of 2020. In September of 2021 Let’s Encrypt engineer Aaron Gable submitted the first draft to the IETF’s ACME working group, and now ARI is in production. The next step is for ACME clients to start supporting ARI, a process we plan to help with as best we can in the coming months.

ARI is a huge step forward for agility and resiliency in the TLS certificate ecosystem and we’re excited to see it gain widespread adoption!

Supporting Let’s Encrypt

As a project of the Internet Security Research Group (ISRG), 100% of our funding comes from contributions from our community of users and supporters. We depend on their support in order to provide our public benefit services. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. If you can support us with a donation, we ask that you make an individual contribution.

Thank you to our 2023 renewing sponsors

19 januari 2023 - 1:00am

At ISRG, we often say, “as a nonprofit, 100% of our funding comes from charitable contributions.” But what does that actually look like? For nearly a decade, the vast majority of our funding has come from sponsorships—in fact, more than $17 million has been donated to ISRG since 2015. Looking to the year ahead, we wanted to take a moment to thank our renewing sponsors who, quite literally, make our work possible:

With us since the beginning

In 2015, the work ISRG set out to do was viewed by many as audacious, if not incredible. Back then, SSL/TLS was used by less than 40% of page loads. Getting a certificate was costly and complicated. Our aim was to make access to SSL certificates easy, free, and automated. That same year, nineteen sponsors came on board to help us realize this mission. Today, seventeen of those sponsors and grantmakers have stayed on board every year! Now in their eighth year as a sponsor, Hostpoint has renewed their support for 2023. Their Co-founder and CEO, Markus Gebert, commented:

“We have supported Let’s Encrypt since the very beginning. It is very valuable and important that nowadays any website can be equipped with an SSL certificate free of charge.”

Our thanks to Akamai, Cisco, Mozilla, Google, OVHcloud, Internet Society, Shopify, Hostpoint, SiteGround, Cyon, IdenTrust, Vultr, Automattic, Electronic Frontier Foundation, infomaniak, PlanetHoster, and Discourse for their eight years of support.

Committed to a better Internet

We know that finding sponsorship dollars is often anything but a straightforward path. That’s why we approach sponsorship as an ongoing conversation, not a one-time transactional interaction. As a result, we’re proud that each year we see on average 80% of our sponsors renew their support. From large organizations with thousands of staff to one-person shops, our sponsors come in all shapes and sizes—but all share a common goal of helping to make our work happen. 

We are grateful to the 70 sponsors renewing their support for 2023 who combined provide close to 60% of our operating budget. Their continued support means we begin 2023 well on our way towards our fundraising need for the year. Shopify, a sponsor since 2015, has renewed their Gold sponsorship for 2023. Their Founder and CEO, Tobi Lütke, commented:

“Let’s Encrypt makes it easy for everyone to do the right thing to secure the Internet. We couldn’t be happier to give our support to such a great effort.”

Together, these organizations make the mission of ISRG, and its impact for billions of people around the world, possible. Powered by their support, we look forward to continuing to build a Web that works for everyone, everywhere.

Supporting Let’s Encrypt

As a project of the Internet Security Research Group (ISRG), 100% of our funding comes from contributions from our community of users and supporters. We depend on their support in order to provide our public benefit services. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. If you can support us with a donation, we ask that you make an individual contribution.

A Look into the Engineering Culture at ISRG

12 januari 2023 - 1:00pm

Engineers design systems and processes to ensure high-quality outcomes and solutions. What if the same lens could be used to build a workplace where these very same engineers can thrive? Many organizations toil over how to build an environment where employees are engaged, challenged, and happy with their workplace. While ISRG is not immune to those challenges, we do implement a few distinctive practices that help mitigate some workplace difficulties. Because 68% of our staff are engineers, in this post we will focus on how we are building a workplace culture where engineers can thrive.

1. Aligning Growth Aspirations

It happens again and again: a solid engineer grows, moves up the ranks, and is then promoted to team manager, where they are supposed to juggle individual contributions, technical oversight, and people management. Often the engineer may not even have growth aspirations in people management, and yet they are put in a position where they are expected to know how to manage people and do it well. This can leave the employee feeling unable to do their job well, and lead to imposter syndrome, burnout, or unreasonable expectations for everyone else put in a similar position.

To address this issue, our engineering career ladder intentionally does not include management requirements. This enables engineers to continually grow as individual contributors without being forced into responsibilities they may not be interested in or skilled at.

Many of our Site Reliability Engineers (SREs) have a background in operations work. To support their growth, we run a job rotation cycle in which SREs spend 12-18 months on our Developer team to foster coding, architecture, and design skills. This rotation also strengthens mentorship among team members and the connection between the two teams, leading to better alignment in priorities and understanding.

It is essential to consistently support employees in their growth and goal setting so that workforce planning can be done with the employees' best interests in mind. We do this by cultivating a psychologically safe environment where employees feel comfortable asking questions and making mistakes, and are encouraged to reflect and be open about their aspirations. Processes that help this along include regular structured check-ins, performance reviews, blameless post-incident debriefs, and open feedback and communication with peers and leaders.

2. Mitigate the Management SPOF (Single Point of Failure)

Every engineering team at ISRG is led by a Technical Lead and a People Manager. This separation of technical and people oversight allows for the work of leading an engineering team to be broken up so that it is not all resting on one person. The Technical Lead can focus on being in charge of the technical viability, structures, and processes while the People Manager can focus on things such as individual and team goal setting, growth opportunities, and conflict resolution.

The Technical Lead and People Manager come together on process development, visibility, and recognition. They also address things for each other without playing the other’s role, thus mitigating the “who manages the manager” quandary. There are more instances where collaboration is needed between the two positions, and that crossover lends more perspective and opinion on what could be a complex issue.

3. Intentional Scalability

It is easy to dive straight into action items and deadlines, and then before you know it, things are rapidly scaling in efforts to keep up. The analogy of “building the plane while it’s flying” comes to mind. Later down the line those scaled systems show flaws that are far more difficult to repair.

Much like designing a reliable and scalable engineering system, our goal is to create a workforce system that can handle increases in load while maintaining effective performance without redesigning the whole thing or “rebuilding the plane.”

Our dual leadership approach sets up our management with increased load and changing priorities in mind. Both people have more wiggle room to anticipate and adjust. It may seem superfluous to have Engineering People Managers in a small organization; however, this prepares for future growth with a relatively lean solution and without extra complexity.

Like all scalable solutions, there is the upfront investment of time and money. However, the benefits will far outweigh the costs in the long run since building on scalable systems is typically less expensive than trying to adapt or redesign less agile systems.

While reflecting on our engineering workplace systems and how they came to be, we recognized that many were organically built out of having a remote workplace, autonomous teams, and the driving values of flexibility and inclusion. We will continue to design practices with these things in mind.

All in all, when looked at through a holistic lens, building an engineering workplace culture involves several considerations similar to those we focus on when designing software systems. The obvious difference is that instead of functions and data, we are dealing with actual people, with feelings and ever-changing wants and needs. That is why it is important to once again acknowledge that no two workplaces are the same and there are no perfect solutions, but we hope these few points lead to thoughtful reflection on how organizations can improve their engineers' workplace experience.

If this sounds like a culture you’d like to be a part of, check out our open jobs!

Supporting Let’s Encrypt

As a project of the Internet Security Research Group (ISRG), 100% of our funding comes from contributions from our community of users and supporters. We depend on their support in order to provide our public benefit services. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. If you can support us with a donation, we ask that you make an individual contribution.

Let’s Encrypt improves how we manage OCSP responses

15 december 2022 - 1:00am

Let’s Encrypt has improved how we manage Online Certificate Status Protocol (OCSP) responses by deploying Redis and generating responses on-demand rather than pre-generating them, making us more reliable than ever.

About OCSP Responses

OCSP is used to communicate the revocation status of TLS certificates. When an ACME agent signs a request to revoke a certificate, our Let’s Encrypt Certificate Authority (CA) verifies whether the request is authorized and, if it is, begins publishing a ‘revoked’ OCSP response for that certificate. Each time a relying party, such as a browser, visits a domain with a Let’s Encrypt certificate, it can request information about whether the certificate has been revoked, and we serve a reply containing ‘good’ or ‘revoked’, signed by our CA, which we call an OCSP response.

An Enormous OCSP Response Load: 100,000 Every Second

Let’s Encrypt currently serves over 300 million domains, which means we receive an enormous number of certificate revocation status requests — fielding around 100,000 OCSP responses every second!

Normally 98-99% of our OCSP responses are handled by our Content Delivery Network (CDN). But there are times when our CDN has an issue, requiring Let’s Encrypt to directly accept a larger number of requests. Historically, we could effectively respond to a maximum of 6% of our OCSP response traffic on our own. Should the need arise for us to accept much more than that, some of our systems might begin to take too long to return results, return significant numbers of errors, or even stop accepting new requests. Not an ideal situation for us, or the Internet.

Our inability to serve OCSP responses during an issue with one of our CDNs could result in a slowdown in users' browsing speed or an inability to connect to a website — or worse, Internet users unintentionally visiting domains for which a certificate has been revoked. Browsers react differently to unresponsive OCSP, but one thing was clear: our systems needed to handle these occasions much better.

Increasing our Reliability

After working on this throughout most of 2022, our engineers have dramatically improved our ability to independently serve OCSP responses. We did that by deploying Redis as an in-memory caching layer that helps protect our database by absorbing traffic spikes, whether due to CDN issues or our own actions, such as CDN cache clearing.

Pivot in Design

Our team developed a system architecture design to organize/change all of the various interconnected systems needed to make Redis trusted to serve our OCSP responses. Amidst the fervor of developing this design, our engineers identified a resource we could depend upon more heavily to simplify the overall architecture and still realize incredible reliability gains. Rather than pre-signing OCSP status responses at regular intervals, storing the results in a relational database, and asking Redis to keep copies—we could keep simple but authoritative certificate status information in our database. We could then leverage fast, concurrent signing power from our HSMs to Just-in-Time sign a fresh OCSP response, cache it in Redis, and return it to the requester. Thanks to this, the demands on the relational database became much lighter (especially total table-writes and write-contention), the speed was impressive, and Redis wasn’t holding anything that couldn’t be (very very quickly) regenerated.
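The cache-aside flow described above can be sketched as follows. This is a simplified illustration under assumed names: a dict stands in for Redis, and sign() stands in for the HSM signing call; it is not Let's Encrypt's actual implementation.

```python
import time

class OcspResponder:
    """Illustrative cache-aside OCSP flow: serve from cache when fresh,
    otherwise look up authoritative status, sign just-in-time, and cache."""

    def __init__(self, ttl_seconds=3600):
        self.cache = {}  # serial -> (expiry timestamp, signed response)
        self.ttl = ttl_seconds

    def sign(self, serial, status):
        # Placeholder for the HSM signing operation.
        return f"signed:{serial}:{status}"

    def respond(self, serial, status_lookup):
        now = time.monotonic()
        entry = self.cache.get(serial)
        if entry and entry[0] > now:
            return entry[1]                    # cache hit: no DB, no HSM
        status = status_lookup(serial)         # authoritative status from DB
        response = self.sign(serial, status)   # just-in-time signing
        self.cache[serial] = (now + self.ttl, response)
        return response
```

The key property is the one described above: the cache holds nothing that can't be quickly regenerated, and the database only sees traffic for cache misses.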

Testing our Systems

The first test was to directly accept 1/16 of the requests by dropping a segment of our CDN cache. In that initial test we handled ~12,500 requests per second. Successive tests ratcheted up to a 1/8 cache drop, then 1/4, then 1/2, then a full 100% cache drop. With each ratcheting up of the test load we were able to monitor and glean insights as to how our deployment could handle the traffic. In the final test of 100% of requests, our systems remained responsive. This means that if we experience a spike in the number of OCSP responses we need to accept moving forward, we are equipped to handle them, dramatically reducing the risks to Internet users.

Supporting Let’s Encrypt

As a project of the Internet Security Research Group (ISRG), 100% of our funding comes from contributions from our community of users and supporters. We depend on their support in order to provide our public benefit services. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. If you can support us with a donation, we ask that you make an individual contribution.

A Year-End Letter from our Executive Director

5 december 2022 - 1:00pm

This letter was originally published in our 2022 annual report.

The past year at ISRG has been a great one and I couldn’t be more proud of our staff, community, funders, and other partners that made it happen. Let’s Encrypt continues to thrive, serving more websites around the world than ever before with excellent security and stability.

A particularly big moment was when Let’s Encrypt surpassed 300,000,000 websites served. When I was informed that we had reached that milestone, my first reaction was to be excited and happy about how many people we’ve been able to help. My second reaction, following on quickly after the first, was to take a deep breath and reflect on the magnitude of the responsibility we have here.

The way ISRG is translating that sense of responsibility to action today is probably best described as a focus on agility and resilience. We need to assume that, despite our best efforts trying to prevent issues, unexpected and unfortunate events will happen and we need to position ourselves to handle them.

Back in March of 2020 Let’s Encrypt needed to respond to a compliance incident that affected nearly three million certificates. That meant we needed to get our subscribers to renew those three million certificates in a very short period of time or the sites might have availability issues. We dealt with that incident pretty well considering the remediation options available, but it was clear that incremental improvements would not make enough of a difference for events like this in the future. We needed to introduce systems that would allow us to be significantly more agile and resilient going forward.

Since then we’ve developed a specification for automating certificate renewal signals so that our subscribers can handle revocation/renewal events as easily as they can get certificates in the first place (it just happens automatically in the background!). That specification is making its way through the IETF standards process so that the whole ecosystem can benefit, and we plan to deploy it in production at Let’s Encrypt shortly. Combined with other steps we’ve taken in order to more easily handle renewal traffic surges, Let’s Encrypt should be able to respond on a whole different level the next time we need to ask significant numbers of subscribers to renew early.

This kind of work on agility and resilience is critical if we’re going to improve security and privacy at scale on the Web.

Our Divvi Up team has made a huge amount of progress implementing a new service that will bring privacy respecting metrics to millions of people. Applications collect all kinds of metrics: some of them are sensitive, some of them aren’t, and some of them seem innocuous but could reveal private information about a person. We’re making it possible for apps to get aggregated, anonymized metrics that give insight at a population level while protecting the privacy of the people who are using those apps. Everybody wins - users get great privacy and apps get the metrics they need without handling individual user data. As we move into 2023, we’ll continue to grow our roster of beta testers and partners.

Our Prossimo project started in 2020 with a clear goal: move security sensitive software infrastructure to memory safe code. Since then, we’ve gotten a lot of code written to improve memory safety on the Internet.

We’re ending the year with Rust support being merged into the Linux kernel and the completion of a memory safe NTP client and server implementation. We’re thrilled about the potential for a more memory safe kernel, but now we need to see the development of drivers in Rust. We’re particularly excited about an NVMe driver that shows excellent initial performance metrics while coming with the benefit of never producing a memory safety bug. We are actively working to make similar progress on Rustls, a high-performance TLS library, and Trust-DNS, a fully recursive DNS resolver.

All of this is made possible by charitable contributions from people like you and organizations around the world. Since 2015, tens of thousands of people have given to our work. They’ve made the case for corporate sponsorship, given through their donor-advised funds, or set up recurring donations, sometimes of just $3 a month. That’s all added up to $17M that we’ve used to change the Internet for nearly everyone using it. I hope you’ll join these people and support us financially if you can.

Remembering Peter Eckersley

12 September 2022 - 2:00am
Peter Eckersley Poster Artwork by Hugh D’Andrade

Peter Eckersley, a Let’s Encrypt co-founder, passed away unexpectedly on September 2nd from complications of cancer treatment. As an incredibly kind, bright, and energetic person, he was a beloved member of the community of people working to make the Internet a better place. He played an important role in the founding of Let’s Encrypt and his loss is felt deeply by many in our organization.

Peter met Alex Halderman at the RSA Conference in 2012 and the two of them started to make plans for technology to automate the process of acquiring HTTPS certificates. This work included early designs for what would become the ACME protocol. Peter and Alex later teamed up with a parallel effort by Josh Aas and Eric Rescorla at Mozilla, and the four of us worked together to create a new automated public benefit CA. The result was Let’s Encrypt, which began service in 2015.

Peter also led the development of the initial ACME client, which would eventually become Certbot. In a reflection of Peter’s vision for making the Internet secure by default, Certbot aims to fully automate HTTPS deployment, rather than simply procure a certificate. Today, Certbot is among the most popular ACME clients, and it is developed and maintained by Peter’s former team at the Electronic Frontier Foundation (EFF).

Peter was a member of our Board of Directors for several years. We greatly valued his contributions as a Director, but one of the memories from that time that makes us smile the most is Peter’s habit of showing up to board meetings with a messenger bag over his shoulder, helmet hair, and rosy cheeks from arriving by bike.

Making change at scale on the Internet is not easy. One way to get it done is to be both a dreamer and someone who possesses the deep technical knowledge necessary to bring dreams to reality. Peter was one of those people, and we’re grateful to have been able to work with him.

We hope to honor Peter’s life by letting the qualities we admired so much in him - his energy, optimism, kindness, and pursuit of knowledge - inspire our efforts going forward.

Peter’s longtime friend and colleague Seth Schoen, who was among the earliest contributors to Let’s Encrypt and Certbot, further memorializes Peter in a post on our community forum.

A New Life for Certificate Revocation Lists

7 September 2022 - 2:00am

This month, Let’s Encrypt is turning on new infrastructure to support revoking certificates via Certificate Revocation Lists. Despite having been largely supplanted by the Online Certificate Status Protocol for over a decade now, CRLs are gaining new life with recent browser updates. By collecting and summarizing CRLs for their users, browsers are making reliable revocation of certificates a reality, improving both security and privacy on the web. Let’s talk about exactly what this new infrastructure does, and why it’s important.

A Brief History of Revocation

When a certificate becomes untrustworthy (for instance because its private key was compromised), that certificate must be revoked and that information publicized so that no one relies upon it in the future. However, it’s a well-worn adage in the world of the Web Public Key Infrastructure (the Web PKI) that revocation is broken. Over the history of the Web PKI, there have been two primary mechanisms for declaring that a TLS/SSL certificate should no longer be trusted: Certificate Revocation Lists (CRLs) and the Online Certificate Status Protocol (OCSP). Unfortunately, both have major drawbacks.

CRLs are basically just lists of all of the certificates that a given Certificate Authority (CA) has issued which have been revoked. This means that they’re often very large – easily the size of a whole movie. It’s inefficient for your browser to download a giant list of revoked certificates just to check whether the single certificate for the site you’re visiting right now is revoked. These bulky downloads and checks made page loads slow, so OCSP was developed as an alternative.

OCSP is sort of like “what if there were a separate CRL for every single certificate”: when you want to check whether a given certificate has been revoked, your browser can check the status for just that one certificate by contacting the CA’s OCSP service. But because OCSP infrastructure has to be running constantly and can suffer downtime just like any other web service, most browsers treat getting no response at all as equivalent to getting a “not revoked” response. This means that attackers can prevent you from discovering that a certificate has been revoked simply by blocking all of your requests for OCSP information. To help reduce load on a CA’s OCSP services, OCSP responses are valid and can be cached for about a week. But this means that clients don’t retrieve updates very frequently, and often continue to trust certificates for a week after they’re revoked. And perhaps worst of all: because your browser makes an OCSP request for every website you visit, a malicious (or legally compelled) CA could track your browsing behavior by keeping track of what sites you request OCSP for.

So both of the existing solutions don’t really work: CRLs are so inefficient that most browsers don’t check them, and OCSP is so unreliable that most browsers don’t check it. We need something better.

Browser-Summarized CRLs

One possible solution that has been making headway recently is the idea of proprietary, browser-specific CRLs. Although different browsers are implementing this differently (e.g. Mozilla calls theirs CRLite, and Chrome’s are CRLSets), the basic idea is the same.

Rather than having each user’s browser download large CRLs when they want to check revocation, the browser vendor downloads the CRLs centrally. They process the CRLs into a smaller format such as a Bloom filter, then push the new compressed object to all of the installed browser instances using pre-existing rapid update mechanisms. Firefox, for example, is pushing updates as quickly as every 6 hours.
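The compression step can be sketched with a toy Bloom filter. Everything below is illustrative: the `BloomFilter` class, its sizes, and the serial strings are hypothetical, and real deployments go further – CRLite, for example, layers filters into a cascade to eliminate the false positives that a single Bloom filter permits.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: compact set membership with no false negatives."""

    def __init__(self, size_bits: int, num_hashes: int):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, item: bytes):
        # Derive k pseudo-independent bit positions from salted SHA-256 digests.
        for salt in range(self.k):
            digest = hashlib.sha256(salt.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        # Anything added always matches; unrelated items rarely do.
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(item))

# The browser vendor adds every revoked serial from the collected CRLs...
revoked = BloomFilter(size_bits=4096, num_hashes=3)
revoked.add(b"serial-0001")
revoked.add(b"serial-0002")

# ...then ships the compact `bits` array so browsers can check locally.
assert b"serial-0001" in revoked
```

The payoff is size: the filter is a fixed few hundred bytes regardless of how long the underlying CRLs are, which is what makes frequent pushes to every browser instance affordable.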

This means that browsers can download revocation lists ahead of time, keeping page loads fast and mitigating the worst problems of vanilla CRLs. It keeps revocation checks local, and the pushed updates can take immediate effect without waiting for a potentially days-long OCSP cache to expire, preventing all of the worst problems with OCSP.

Thanks to the promise of these browser-summarized CRLs, both the Apple and Mozilla root programs are requiring that all CAs begin issuing CRLs before October 1st, 2022. Specifically, they are requiring that CAs begin issuing one or more CRLs which together cover all certificates issued by that CA, and that the list of URLs pointing to those CRLs be disclosed in the Common CA Database (CCADB). This will allow Safari and Firefox to switch to using browser-summarized CRL checking for revocation.

Our New Infrastructure

When Let’s Encrypt was founded, we made an explicit decision to only support OCSP and not produce CRLs at all. This was because the root program requirements at the time only mandated OCSP, and maintaining both revocation mechanisms would have increased the number of places where a bug could lead to a compliance incident.

When we set out to develop CRL infrastructure, we knew we needed to build for scale, and do so in a way that reflects our emphasis on efficiency and simplicity. Over the last few months we have developed a few new pieces of infrastructure to enable us to publish CRLs in compliance with the upcoming requirements. Each component is lightweight, dedicated to doing a single task and doing it well, and will be able to scale well past our current needs.

Let’s Encrypt currently has over 200 million active certificates on any given day. If we had an incident where we needed to revoke every single one of those certificates at the same time, the resulting CRL would be over 8 gigabytes. In order to make things less unwieldy, we will be dividing our CRLs into 128 shards, each topping out at a worst-case maximum of 70 megabytes. We use some carefully constructed math to ensure that – as long as the number of shards doesn’t change – all certificates will remain within their same shards when the CRLs are re-issued, so that each shard can be treated as a mini-CRL with a consistent scope.
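The post doesn’t publish the exact shard-assignment math, but one simple scheme with the stated property – every certificate stays in the same shard across re-issuances, as long as the shard count never changes – is to hash each certificate’s serial number modulo the shard count. The function below is a hypothetical sketch, not Let’s Encrypt’s actual implementation:

```python
import hashlib

NUM_SHARDS = 128  # fixed; changing this would reshuffle every assignment

def shard_for(serial_hex: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a certificate serial number (hex, no separators) to a CRL shard.

    Deterministic: the same serial always lands in the same shard, so each
    shard behaves as a mini-CRL with a consistent scope across re-issuance.
    """
    digest = hashlib.sha256(bytes.fromhex(serial_hex)).digest()
    return int.from_bytes(digest[:8], "big") % num_shards
```

Because the mapping depends only on the serial and the (fixed) shard count, re-issuing all 128 shards never moves a revoked certificate from one shard to another.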

In line with the same best practices that we follow for our certificate issuance, all of our CRLs will be checked for compliance with RFC 5280 and the Baseline Requirements before they are signed by our issuing intermediates. Although the popular linting library zlint does not yet support linting CRLs, we have written our own collection of checks and hope to upstream them to zlint in the future. These checks will help prevent compliance incidents and ensure a seamless issuance and renewal cycle.
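As a hypothetical example of the kind of pre-signing check involved: the Baseline Requirements bound a CRL’s validity window (nextUpdate must come after thisUpdate, within a set maximum such as ten days). The lint below is a sketch in that spirit, not one of Let’s Encrypt’s actual checks, which cover many more RFC 5280 and BR rules:

```python
from datetime import datetime, timedelta

def lint_crl_window(this_update: datetime, next_update: datetime,
                    max_validity: timedelta = timedelta(days=10)) -> list[str]:
    """Return a list of problems with a CRL's validity window (empty = pass)."""
    problems = []
    if next_update <= this_update:
        problems.append("nextUpdate must be after thisUpdate")
    elif next_update - this_update > max_validity:
        problems.append("CRL validity window exceeds the allowed maximum")
    return problems
```

Running a battery of checks like this before signing means a malformed CRL never leaves the building, which is the whole point of linting pre-signature rather than post-issuance.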

As part of developing these new capabilities, we have also made several improvements to the Go standard library’s implementation of CRL generation and parsing. We look forward to contributing more improvements as we and the rest of the Go community work with CRLs more frequently in the future.

Although we will be producing CRLs which cover all certificates that we issue, we will not be including those URLs in the CRL Distribution Point extension of our certificates. For now, as required by the Baseline Requirements, our certificates will continue to include an OCSP URL which can be used by anyone to obtain revocation information for each certificate. Our new CRL URLs will be disclosed only in CCADB, so that the Apple and Mozilla root programs can consume them without exposing them to potentially large download traffic from the rest of the internet at large.

The Future of Revocation

There’s still a long way to go before revocation in the Web PKI is truly fixed. The privacy concerns around OCSP will only be mitigated once all clients have stopped relying on it, and we still need to develop good ways for non-browser clients to reliably check revocation information.

We look forward to continuing to work with the rest of the Web PKI community to make revocation checking private, reliable, and efficient for everyone.

If you’re excited about our work developing more robust and private revocation mechanisms, you can support us with a donation, or encourage your company or organization to sponsor our work. As a nonprofit project, 100% of our funding comes from contributions from our community and supporters, and we depend on your support.

Nurturing Continued Growth of Our Oak CT Log

19 May 2022 - 2:00am

Let’s Encrypt has been running a Certificate Transparency (CT) log since 2019 as part of our commitment to keeping the Web PKI ecosystem healthy. CT logs have become important infrastructure for an encrypted Web 1, but have a well-deserved reputation for being difficult to operate at high levels of trust: Only 6 organizations run logs that are currently considered to be “qualified.” 2

Our Oak log is the only qualified CT log that runs on an entirely open source stack 3. In the interest of lowering the barrier for other organizations to join the CT ecosystem, we want to cover a few recent changes to Oak that might be helpful to anyone else planning to launch a log based on Google’s Trillian backed by MariaDB:

  • The disk I/O workload of Trillian atop MariaDB is easily mediated by front-end rate limits, and

  • It’s worth the complexity to split each new annual CT log into its own Trillian/MariaDB stack.

This post will update some of the information from the previous post How Let’s Encrypt Runs CT Logs.

Growing Oak While Staying Open Source

Oak runs on a free and open source stack: Google’s Trillian data store, backed by MariaDB, running at Amazon Web Services (AWS) via Amazon’s Relational Database Service (RDS). To our knowledge, Oak is the only trusted CT log without closed-source components 3.

Open Source Stack

Other operators of Trillian have opted to use different databases which segment data differently, but the provided MySQL-compatible datastore has successfully kept up with Let’s Encrypt’s CT log volume (currently above 400 GB per month). The story for scaling Oak atop MariaDB is quite typical for any relational database, though the performance requirements are stringent.

Keeping Oak Qualified

The policies that Certificate Transparency Log operators follow require there to be no significant downtime, in addition to the more absolute and difficult requirement that the logs themselves make no mistakes: Given the append-only nature of Certificate Transparency, seemingly minor data corruption prompts permanent disqualification of the log 4. To minimize the impacts of corruption, as well as for scalability reasons, it’s become normal for CT logs to distribute the certificates they contain in different, smaller individual CT logs, called shards.

Splitting Many Years Of Data Among Many Trees

The Let’s Encrypt Oak CT log is actually made up of many individual CT log shards each named after a period of time: Oak 2020 contains certificates which expired in 2020; Oak 2022 contains certificates which expire in 2022. For ease of reference, we refer to these as “temporal log shards,” though in truth each is an individual CT log sharing the Oak family name.

It is straightforward to configure a single Trillian installation to support multiple CT log shards. Each log shard is allocated storage within the backing database, and the Trillian Log Server can then service requests for all configured logs.

The Trillian database schema is quite compact and easy to understand:

  • Each configured log gets a Tree ID, with metadata in several tables.

  • All log entries – certificates in our case – get a row in LeafData.

  • Entries that haven’t been sequenced yet get a row in the table Unsequenced, which is normally kept empty by the Trillian Log Signer service.

  • Once sequenced, entries are removed from the Unsequenced table and added as a row in SequencedLeafData.
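The flow through those tables can be modeled in a few lines of Python. The names mirror the schema described above, but this is a simplified in-memory illustration, not Trillian’s actual code:

```python
# Hypothetical in-memory model of the Trillian tables described above.
leaf_data = {}            # leaf hash -> DER-encoded certificate bytes
unsequenced = []          # leaf hashes awaiting the Log Signer
sequenced_leaf_data = {}  # sequence number -> leaf hash

def queue_leaf(leaf_hash: bytes, der_cert: bytes) -> None:
    """Log Server path: store the entry and mark it unsequenced."""
    leaf_data[leaf_hash] = der_cert
    unsequenced.append(leaf_hash)

def sequence_batch() -> None:
    """Log Signer path: drain Unsequenced, assigning sequence numbers."""
    while unsequenced:
        leaf_hash = unsequenced.pop(0)
        sequenced_leaf_data[len(sequenced_leaf_data)] = leaf_hash
```

Note that the certificate bytes live only in `leaf_data`, no matter how many logs share the instance – which is exactly why that one table dominates storage growth, as the next section explains.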

Database Layout

In a nutshell: No matter how many different certificate transparency trees and subtrees you set up for a given copy of Trillian, all of them will store the lion’s share of their data, particularly the DER-encoded certificates themselves, interwoven into the one LeafData table. Since Trillian Log Server can only be configured with a single MySQL connection URI, limiting it to a single database, that single table can get quite big.

For Oak, the database currently grows at a rate of about 400 GB per month; that rate is ever-increasing as the use of TLS grows and more Certificate Authorities submit their certificates to our logs.

Amazon RDS Size Limitations

In March 2021 we discovered that Amazon RDS has a 16 TB limit per tablespace when RDS is configured to use one file-per-table, as we were doing for all of our CT log shards. Luckily, we reached this limit first in our testing environment, the Testflume log.

Part of Testflume’s purpose was to grow ahead of the production logs in total size, as well as test growth with more aggressive configuration options than the production Oak log had, and in these ways it was highly successful.

Revisiting Database Design

In our blog post, How Let’s Encrypt Runs CT Logs, we wrote that each year we planned “to freeze the previous year’s shard and move it to a less expensive serving infrastructure, reclaiming its storage for our live shards.” However, that is not practical while continuing to serve traffic from the same database instance. Deleting terabytes of rows from an InnoDB table that is in-use is not feasible. Trillian’s MySQL-compatible storage backend agrees: as implemented, Trillian’s built-in Tree Deletion mechanism marks a tree as “soft deleted,” and leaves the removal of data from the LeafData table (and others) as an exercise for the administrator.

Since Trillian’s MySQL-compatible backend does not support splitting the LeafData among multiple tables by itself, and since deleting stale data from those tables degrades performance across the whole database server, we have to prune out prior seasons’ data another way in order to continue scaling the Oak CT log.

Single RDS Instance with Distinct Schema per Log Shard

We considered adding new database schemas to our existing MariaDB-backed Amazon RDS instance. In this design, we would run a Trillian CT Front-End (CTFE) instance per temporal log shard, each pointing to individual Trillian Log Server and Log Signer instances, which themselves point to a specific temporally-identified database schema name and tablespace. This is cost-effective, and it gives us ample room to avoid the 16 TB limit.

Distinct Schema per Log Shard in a Single Database

However, if heavy maintenance is required on any part of the underlying database, it would affect every log shard contained within. In particular, we know from using MariaDB with InnoDB inside the Let’s Encrypt CA infrastructure that truncating and deleting a multi-terabyte table causes performance issues for the whole database while the operation runs. Inside the CA infrastructure we mitigate that performance issue by deleting table data only on database replicas; this is more complicated in a more hands-off managed hosting environment like RDS.

Since we wish to clear out old data regularly as a matter of data hygiene, and the performance requirements for a CT log are strict, this option wasn’t feasible.

Distinct RDS Instance per Log Shard

While it increases the number of managed system components, it is much cleaner to give each temporal log shard its own database instance. Like the Distinct Schema per Log Shard model, we now run Trillian CTFE, Log Server, and Log Signer instances for each temporal log shard. However, each log shard gets its own RDS instance for the active life of the log 5. At log shutdown, the RDS instance is simply deprovisioned.

Using Distinct Databases Per Log

With the original specifications for the Oak log, this would require allocating a significant amount of data I/O resources. However, years of experience running the Testflume log showed that Trillian in AWS did not require the highest possible disk performance.

Tuning IOPS

We launched Oak using the highest performance AWS Elastic Block Storage available at the time: Provisioned IOPS SSDs (type io1). Because of the strict performance requirements on CT logs, we worried that without the best possible performance for disk I/O that latency issues might crop up that could lead to disqualification. As we called out in our blog post How Let’s Encrypt Runs CT Logs, we hoped that we could use a simpler storage type in the future.

To test that, we used General Purpose SSD storage type (type gp2) for our testing CT log, Testflume, and obtained nominal results over the lifespan of the log. In practice higher performance was unnecessary because Trillian makes good use of database indices. Downloading the whole log tree from the first leaf entry is the most significant demand of disk I/O, and that manner of operation is easily managed via rate limits at the load balancer layer.

Our 2022 and 2023 Oak shards now use type gp2 storage and are performing well.

Synergistically, the earlier change to run a distinct RDS instance for each temporal log shard has also further reduced Trillian’s I/O load: A larger percentage of the trimmed-down data fits in MariaDB’s in-memory buffer pool.

More Future Improvements

It’s clear that CT logs will continue to accelerate their rate of growth. Eventually, if we remain on this architecture, even a single year’s CT log will exceed the 16 TB table size limit. In advance of that, we’ll have to take further actions. Some of those might be:

  • Change our temporal log sharding strategy to shorter-than-year intervals, perhaps every 3 or 6 months.

  • Reduce the absolute storage requirements for Trillian’s MySQL-compatible storage backend by de-duplicating intermediate certificates.

  • Contribute a patch to add table sharding to Trillian’s MySQL-compatible storage backend.

  • Change storage backends entirely, perhaps to a sharding-aware middleware, or another more horizontally-scalable open-source system.

We’ve also uprooted our current Testflume CT log and brought online a replacement which we’ve named Sapling. As before, this test-only log will evaluate more aggressive configurations that might bear fruit in the future.

As Always, Scaling Data Is The Hard Part

Though the performance requirements for CT logs are strict, the bulk of the scalability difficulty has to do with the large amount of data and the high and ever-increasing rate of growth; this is the way of relational databases. Horizontal scaling continues to be the solution, and is straightforward to apply to the open source Trillian and MariaDB stack.

Supporting Let’s Encrypt

As a nonprofit project, 100% of our funding comes from contributions from our community of users and supporters. We depend on their support in order to provide our services for the public benefit. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. If you can support us with a donation, we ask that you make an individual contribution.


  1. Chrome and Safari check that certificates include evidence that certificates were submitted to CT logs. If a certificate is lacking that evidence, it won’t be trusted. https://certificate.transparency.dev/useragents/ ↩︎

  2. As of publication, these organizations have logs Google Chrome considers qualified for Certificate Authorities to embed their signed timestamps: Cloudflare, DigiCert, Google, Let’s Encrypt, Sectigo, and TrustAsia. https://ct.cloudflare.com/logs and https://twitter.com/__agwa/status/1527407151660122114 ↩︎

  3. DigiCert’s Yeti CT log deployment at AWS uses a custom Apache Cassandra backend; Oak is the only production log using the Trillian project’s MySQL-compatible backend. SSLMate maintains a list of known log software at https://sslmate.com/labs/ct_ecosystem/ecosystem.html ↩︎

  4. In the recent past, a cosmic ray event led to the disqualification of a CT log. Andrew Ayer has a good discussion of this in his post “How Certificate Transparency Logs Fail and Why It’s OK” https://www.agwa.name/blog/post/how_ct_logs_fail, which references the discovery on the ct-policy list https://groups.google.com/a/chromium.org/g/ct-policy/c/PCkKU357M2Q/m/xbxgEXWbAQAJ↩︎

  5. Logs remain online for a period after they stop accepting new entries to give a grace period for mirrors and archive activity. ↩︎

Nurturing Continued Growth of Our Oak CT Log

19 mei 2022 - 2:00am

Let’s Encrypt has been running a Certificate Transparency (CT) log since 2019 as part of our commitment to keeping the Web PKI ecosystem healthy. CT logs have become important infrastructure for an encrypted Web 1, but have a well-deserved reputation for being difficult to operate at high levels of trust: Only 5 organizations run logs that are currently considered to be “qualified.” 2

Our Oak log is the only qualified CT log that runs on an entirely open source stack 3. In the interest of lowering the barrier for other organizations to join the CT ecosystem, we want to cover a few recent changes to Oak that might be helpful to anyone else planning to launch a log based on Google’s Trillian backed by MariaDB:

  • The disk I/O workload of Trillian atop MariaDB is easily mediated by front-end rate limits, and

  • It’s worth the complexity to split each new annual CT log into its own Trillian/MariaDB stack.

This post will update some of the information from the previous post How Let’s Encrypt Runs CT Logs.

Growing Oak While Staying Open Source

Oak runs on a free and open source stack: Google’s Trillian data store, backed by MariaDB, running at Amazon Web Services (AWS) via Amazon’s Relational Database Service (RDS). To our knowledge, Oak is the only trusted CT log without closed-source components 3.

Open Source Stack

Other operators of Trillian have opted to use different databases which segment data differently, but the provided MySQL-compatible datastore has successfully kept up with Let’s Encrypt’s CT log volume (currently above 400 GB per month). The story for scaling Oak atop MariaDB is quite typical for any relational database, though the performance requirements are stringent.

Keeping Oak Qualified

The policies that Certificate Transparency Log operators follow require there to be no significant downtime, in addition to the more absolute and difficult requirement that the logs themselves make no mistakes: Given the append-only nature of Certificate Transparency, seemingly minor data corruption prompts permanent disqualification of the log 4. To minimize the impacts of corruption, as well as for scalability reasons, it’s become normal for CT logs to distribute the certificates they contain in different, smaller individual CT logs, called shards.

Splitting Many Years Of Data Among Many Trees

The Let’s Encrypt Oak CT log is actually made up of many individual CT log shards each named after a period of time: Oak 2020 contains certificates which expired in 2020; Oak 2022 contains certificates which expire in 2022. For ease of reference, we refer to these as “temporal log shards,” though in truth each is an individual CT log sharing the Oak family name.

It is straightforward to configure a single Trillian installation to support multiple CT log shards. Each log shard is allocated storage within the backing database, and the Trillian Log Server can then service requests for all configured logs.

The Trillian database schema is quite compact and easy to understand:

  • Each configured log gets a Tree ID, with metadata in several tables.

  • All log entries – certificates in our case – get a row in LeafData.

  • Entries that haven’t been sequenced yet get a row in the table Unsequenced, which is normally kept empty by the Trillian Log Signer service.

  • Once sequenced, entries are removed from the Unsequenced table and added as a row in SequencedLeafData.

Database Layout

In a nutshell: No matter how many different certificate transparency trees and subtrees you set up for a given copy of Trillian, all of them will store the lion’s share of their data, particularly the DER-encoded certificates themselves, interwoven into the one LeafData table. Since Trillian Log Server can only be configured with a single MySQL connection URI, limiting it to a single database, that single table can get quite big.

For Oak, the database currently grows at a rate of about 400 GB per month; that rate is ever-increasing as the use of TLS grows and more Certificate Authorities submit their certificates to our logs.

Amazon RDS Size Limitations

In March 2021 we discovered that Amazon RDS has a 16TB limit per tablespace when RDS is configured to use one file-per-table, as we were doing for all of our CT log shards. Luckily, we reached this limit first in our testing environment, the Testflume log.

Part of Testflume’s purpose was to grow ahead of the production logs in total size, as well as test growth with more aggressive configuration options than the production Oak log had, and in these ways it was highly successful.

Revisiting Database Design

In our blog post, How Let’s Encrypt Runs CT Logs, we wrote that each year we planned “to freeze the previous year’s shard and move it to a less expensive serving infrastructure, reclaiming its storage for our live shards.” However, that is not practical while continuing to serve traffic from the same database instance. Deleting terabytes of rows from an InnoDB table that is in-use is not feasible. Trillian’s MySQL-compatible storage backend agrees: as implemented, Trillian’s built-in Tree Deletion mechanism marks a tree as “soft deleted," and leaves the removal of data from the LeafData table (and others) as an exercise for the administrator.

Since Trillian’s MySQL-compatible backend does not support splitting the LeafData among multiple tables by itself, and since deleting stale data from those tables yields slow performance across the whole database server, to continue to scale the Oak CT log we have to instead prune out the prior seasons' data another way.

Single RDS Instance with Distinct Schema per Log Shard

We considered adding new database schemas to our existing MariaDB-backed Amazon RDS instance. In this design, we would run a Trillian CT Front-End (CTFE) instance per temporal log shard, each pointing to individual Trillian Log Server and Log Signer instances, which themselves point to a specific temporally-identified database schema name and tablespace. This is cost-effective, and it gives us ample room to avoid the 16 TB limit.

Distinct Schema per Log Shard in a Single Database

However, if heavy maintenance is required on any part of the underlying database, it would affect every log shard contained within. In particular, we know from using MariaDB with InnoDB inside the Let’s Encrypt CA infrastructure that truncating and deleting a multi-terabyte table causes performance issues for the whole database while the operation runs. Inside the CA infrastructure we mitigate that performance issue by deleting table data only on database replicas; this is more complicated in a more hands-off managed hosting environment like RDS.

Since we wish to clear out old data regularly as a matter of data hygiene, and the performance requirements for a CT log are strict, this option wasn’t feasible.

Distinct RDS Instance per Log Shard

While it increases the number of managed system components, it is much cleaner to give each temporal log shard its own database instance. Like the Distinct Schema per Log Shard model, we now run Trillian CTFE, Log Server, and Log Signer instances for each temporal log shard. However, each log shard gets its own RDS instance for the active life of the log 5. At log shutdown, the RDS instance is simply deprovisioned.

Using Distinct Databases Per Log

With the original specifications for the Oak log, this would require allocating a significant amount of data I/O resources. However, years of experience running the Testflume log showed that Trillian in AWS did not require the highest possible disk performance.

Tuning IOPS

We launched Oak using the highest performance AWS Elastic Block Storage available at the time: Provisioned IOPS SSDs (type io1). Because of the strict performance requirements on CT logs, we worried that without the best possible performance for disk I/O that latency issues might crop up that could lead to disqualification. As we called out in our blog post How Let’s Encrypt Runs CT Logs, we hoped that we could use a simpler storage type in the future.

To test that, we used General Purpose SSD storage type (type gp2) for our testing CT log, Testflume, and obtained nominal results over the lifespan of the log. In practice higher performance was unnecessary because Trillian makes good use of database indices. Downloading the whole log tree from the first leaf entry is the most significant demand of disk I/O, and that manner of operation is easily managed via rate limits at the load balancer layer.

Our 2022 and 2023 Oak shards now use type gp2 storage and are performing well.

Synergistically, the earlier change to run a distinct RDS instance for each temporal log shard has also further reduced Trillian’s I/O load: A larger percentage of the trimmed-down data fits in MariaDB’s in-memory buffer pool.

More Future Improvements

It’s clear that CT logs will continue to accelerate their rate of growth. Eventually, if we remain on this architecture, even a single year’s CT log will exceed the 16 TB table size limit. In advance of that, we’ll have to take further actions. Some of those might be:

  • Change our temporal log sharding strategy to shorter-than-year intervals, perhaps every 3 or 6 months.

  • Reduce the absolute storage requirements for Trillian’s MySQL-compatible storage backend by de-duplicating intermediate certificates.

  • Contribute a patch to add table sharding to Trillian’s MySQL-compatible storage backend.

  • Change storage backends entirely, perhaps to a sharding-aware middleware, or another more horizontally-scalable open-source system.
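Of these, de-duplicating intermediates is conceptually straightforward: every log entry stores its certificate chain, but only a handful of distinct intermediate certificates repeat across millions of entries, so storing each intermediate once and referencing it by hash reclaims substantial space. A minimal sketch of the idea (not Trillian's actual schema):

```python
import hashlib

# Content-addressed store: each distinct intermediate is kept once,
# and log entries reference it by SHA-256 digest.
cert_store: dict[str, bytes] = {}

def intern_cert(der: bytes) -> str:
    """Store a certificate once; return its digest as the reference key."""
    digest = hashlib.sha256(der).hexdigest()
    cert_store.setdefault(digest, der)
    return digest

def store_entry(leaf: bytes, chain: list[bytes]) -> dict:
    """A log entry keeps its leaf inline but only digests for the chain."""
    return {"leaf": leaf, "chain": [intern_cert(c) for c in chain]}

# Two entries sharing the same intermediate store its bytes only once.
intermediate = b"fake intermediate DER"
e1 = store_entry(b"leaf one", [intermediate])
e2 = store_entry(b"leaf two", [intermediate])
assert e1["chain"] == e2["chain"]
print(len(cert_store))  # 1
```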

We’ve also uprooted our current Testflume CT log and brought online a replacement which we’ve named Sapling. As before, this test-only log will evaluate more aggressive configurations that might bear fruit in the future.

As Always, Scaling Data Is The Hard Part

Though the performance requirements for CT logs are strict, the bulk of the scalability difficulty has to do with the large amount of data and the high and ever-increasing rate of growth; this is the way of relational databases. Horizontal scaling continues to be the solution, and is straightforward to apply to the open source Trillian and MariaDB stack.

Supporting Let’s Encrypt

As a nonprofit project, 100% of our funding comes from contributions from our community of users and supporters. We depend on their support in order to provide our services for the public benefit. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. If you can support us with a donation, we ask that you make an individual contribution.


  1. Chrome and Safari check that certificates include evidence that certificates were submitted to CT logs. If a certificate is lacking that evidence, it won’t be trusted. https://certificate.transparency.dev/useragents/ ↩︎

  2. As of publication, these organizations have logs Google Chrome considers qualified for Certificate Authorities to embed their signed timestamps: Cloudflare, DigiCert, Google, Let’s Encrypt, and Sectigo. https://ct.cloudflare.com/logs ↩︎

  3. DigiCert’s Yeti CT log deployment at AWS uses a custom Apache Cassandra backend; Oak is the only production log using the Trillian project’s MySQL-compatible backend. SSLMate maintains a list of known log software at https://sslmate.com/labs/ct_ecosystem/ecosystem.html ↩︎

  4. In the recent past, a cosmic ray event led to the disqualification of a CT log. Andrew Ayer has a good discussion of this in his post “How Certificate Transparency Logs Fail and Why It’s OK” https://www.agwa.name/blog/post/how_ct_logs_fail, which references the discovery on the ct-policy list https://groups.google.com/a/chromium.org/g/ct-policy/c/PCkKU357M2Q/m/xbxgEXWbAQAJ ↩︎

  5. Logs remain online for a period after they stop accepting new entries to give a grace period for mirrors and archive activity. ↩︎

TLS Beyond the Web: How MongoDB Uses Let’s Encrypt for Database-to-Application Security

April 28, 2022 - 2:00am

Most of the time, people think about using Let’s Encrypt certificates to encrypt the communication between a website and server. But connections that need TLS are everywhere! In order for us to have an Internet that is 100% encrypted, we need to think beyond the website.

MongoDB’s managed multicloud database service, called Atlas, uses Let’s Encrypt certificates to secure the connection between customers' applications and MongoDB databases, and between service points inside the platform. We spoke with Kenn White, Security Principal at MongoDB, about how his team uses Let’s Encrypt certificates for over two million databases, across 200 datacenters and three cloud providers.

"Let’s Encrypt has become a core part of our infrastructure stack," said Kenn. Interestingly, our relationship didn’t start out that way. MongoDB became a financial sponsor of Let’s Encrypt years earlier simply to support our mission to pursue security and privacy. MongoDB Atlas began to take off and it became clear that TLS would continue to be a priority as they brought on customers like currency exchanges, treasury platforms and retail payment networks. "The whole notion of high automation and no human touch all really appealed to us," said Kenn of MongoDB’s decision to use Let’s Encrypt.

MongoDB’s diverse customer roster means they support a wide variety of languages, libraries, and operating systems. Consequently, their monitoring is quite robust. Over the years, MongoDB has become a helpful resource for Let’s Encrypt engineers to identify edge-case implementation bugs. Their ability to accurately identify issues early helps us respond efficiently; this is a benefit that ripples out across our diverse subscribers all over the Web.

The open sharing of information is a core part of how Let’s Encrypt operates. In fact, "transparency" is one of our key operating principles. The ability to see and understand how Let’s Encrypt is changing helped MongoDB gain trust and confidence in our operations. "I don’t think you can really put a price on the experience we’ve had working with the Let’s Encrypt engineering team," said Kenn. "One thing that I appreciate about Let’s Encrypt is that you’ve always been extremely transparent on your priorities and your roadmap vision. In terms of the technology and your telemetry, this is an evolution; where you are today is far better than where you were two years ago. And two years ago you were already head and shoulders above almost every peer in the industry."

Check out other blog posts in this series about how other large subscribers use Let’s Encrypt certificates.

TLS Simply and Automatically for Europe’s Largest Cloud Customers

Speed at scale: Let’s Encrypt serving Shopify’s 4.5 million domains


Let’s Encrypt Receives the Levchin Prize for Real-World Cryptography

April 13, 2022 - 2:00am

On April 13, 2022, the Real World Crypto steering committee presented the Max Levchin Prize for Real-World Cryptography to Let’s Encrypt. The following is the speech delivered by our Executive Director, Josh Aas, upon receiving the award. We’d like to thank our community for supporting us and invite you to join us in making the Internet more secure and privacy-respecting for everyone.

Thank you to the Real World Crypto steering committee and to Max Levchin for this recognition. I couldn’t be more proud of what our team has accomplished since we started working on Let’s Encrypt back in 2013.

My first temptation is to name some names, but there are so many people who have given a significant portion of their lives to this work over the years that the list would be too long. You know who you are. I hope you’re as proud as I am at this moment.

Let’s Encrypt is currently used by more than 280 million websites, issuing between two and three million certificates per day. I often think about how we got here, looking for some nugget of wisdom that might be useful to others. I’m not sure I’ve really come up with anything particularly profound, but I’m going to give you my thoughts anyway. Generally speaking: we started with a pretty good idea, built a strong team, stayed focused on what’s important, and kept ease of use in mind every step of the way.

Let’s Encrypt ultimately came from a group of people thinking about a pretty daunting challenge. The billions of people living increasingly large portions of their lives online deserved better privacy and security, but in order to do that we needed to convince hundreds of millions of websites to switch to HTTPS. Not only did we want them to make that change, we wanted most of them to make the change within the next three to five years.

Levchin Prize Trophy

We thought through a lot of options but in the end we just didn’t see any other way than to build what became Let’s Encrypt. In hindsight building Let’s Encrypt seems like it was a good and rewarding idea, but at the time it was a frustrating conclusion in many ways. It’s not an easy solution to commit to. It meant standing up a new organization, hiring at least a dozen people, understanding a lot of details about how to operate a CA, building some fairly intense technical systems, and setting all of it up to operate for decades. Many of us wanted to work on this interesting problem for a bit, solve it or at least put a big dent in it, and then move on to other interesting problems. I don’t know about you, but I certainly didn’t dream about building and operating a CA when I was younger.

It needed to be done though, so we got to work. We built a great team that initially consisted of mostly volunteers and very few staff. Over time that ratio reversed itself such that most people working on Let’s Encrypt on a daily basis are staff, but we’re fortunate to continue to have a vibrant community of volunteers who do work ranging from translating our website and providing assistance on our community forums, to maintaining the dozens (maybe hundreds?) of client software options out there.

Today there are just 11 engineers working on Let’s Encrypt, as well as a small team handling fundraising, communication, and administrative tasks. That’s not a lot of people for an organization serving hundreds of millions of websites in every country on the globe, subject to a fairly intense set of industry rules, audits, and high expectations for security and reliability. The team is preparing to serve as many as 1 billion websites. When that day comes to pass the team will be larger, but probably not much larger. Efficiency is important to us, for a couple of reasons. The first is principle - we believe it’s our obligation to do the most good we can with every dollar entrusted to us. The second reason is necessity - it’s not easy to raise money, and we need to do our best to accomplish our mission with what’s available to us.

It probably doesn’t come as a surprise to anyone here at Real World Crypto that ease of use was critical to any success we’ve had in applying cryptography more widely. Let’s Encrypt has a fair amount of internal complexity, but we expose users to as little of that as possible. Ideally it’s a fully automated and forgettable background task even to the people running servers.

The fact that Let’s Encrypt is free is a huge factor in ease of use. It isn’t even about how much money people might be willing or able to pay, but any financial transaction requirement would make it impossible to fully automate our service. At some point someone would have to get a credit card and manage payment information. That task ranges in complexity from finding your wallet to obtaining corporate approval. The existence of a payment in any amount would also greatly limit our geographic availability because of sanctions and financial logistics.

All of these factors led to the decision to form ISRG, a nonprofit entity to support Let’s Encrypt. Our ability to provide this global, reliable service is all thanks to the people and companies who believe in TLS everywhere and have supported us financially. I’m so grateful to all of our contributors for helping us.

Our service is pretty easy to use under normal circumstances, but we’re not done yet. We can be better about handling exceptional circumstances such as large revocation events. Resiliency is good. Automated, smooth resiliency is even better. That’s why I’m so excited about the ACME Renewal Info work we’re doing in the IETF now, which will go into production over the next year.

Everyone here has heard it before, but I’ll say it again because we can’t afford to let it slip our minds. Ease of use is critical for widespread adoption of real world cryptography. As we look toward the future of ISRG, our new projects will have ease of use at their core. In fact, you can learn about our newest project related to privacy-preserving measurement at two of this afternoon’s sessions! Getting ease of use right is not just about the software though. It’s a sort of pas de trois, a dance for three, between software, legal, and finance, in order to achieve a great outcome.

Thank you again. This recognition means so much to us.


New Major Funding from the Ford Foundation

February 25, 2022 - 1:00am

ISRG's pragmatic, public-interest approach to Internet security has fundamentally changed the web at an astonishing scale and pace.

—Michael Brennan, Ford Foundation

The Internet has considerable potential to help build a more just, equitable, and sustainable world for all people. Yet for everyone online—and indeed the billions not yet online—barriers to secure and privacy-respecting communication remain pervasive.

ISRG was founded in 2013 to find and eliminate these barriers. Today, we’re proud to announce a $1M grant from the Ford Foundation to continue our efforts.

Our first project, Let’s Encrypt, leverages technology whose foundation has existed for nearly three decades—TLS certificates for securely communicating information via HTTP. Yet even for people well-versed in technology, adopting TLS proved daunting.

Before Let’s Encrypt, the growth rate for HTTPS page loads merely puttered along. As recently as 2013, just 25% of websites used HTTPS. In order for the Internet to reach its full potential, this glaring risk to peoples’ security and privacy needed to be mitigated.

Let’s Encrypt changed the paradigm. Today 81% of website page loads use HTTPS. That means that you and the other 4.9 billion people online can leverage the Internet for your own pursuits with a greater degree of security and privacy than ever before.

But TLS adoption was just one hurdle. Much can be done to further improve the Internet’s most critical pieces of technology to be more secure; much can be done to further improve the privacy of everyone using the Internet today.

Building our efforts thanks to transformational support

Ford Foundation’s commitment recognizes that the Internet can be a technological tool to build a more just, equitable, and sustainable world, but that it will take organizations like ISRG to help build it.

“Ford Foundation is one of the most respected grantmaking institutions in the world,” Josh Aas, ISRG Executive Director, said. “We are proud that Ford believes in the impact we’ve created and the potential of our efforts to continue benefiting everyone using the Internet.”

This support, which began in 2021, will help ISRG continue to invest in Let’s Encrypt and our other projects, Prossimo and Divvi Up.

Launched in late 2020, Prossimo intends to move the Internet's most critical security-sensitive software infrastructure to memory safe code. Society pays the price for memory safety vulnerabilities with privacy violations, staggering financial losses, denial of public services (e.g., hospitals, power grids), and human rights violations. Meaningful effort will be required to bring about such change, but the Internet will be around for a long time. There is time for ambitious efforts to pay off.

Divvi Up is a system for privacy-preserving metrics analysis. With Divvi Up, organizations can analyze and share data to further their aims without sacrificing their users’ privacy. Divvi Up is currently used by COVID-19 Exposure Notification apps and has processed over 14 billion metrics, helping Public Health Authorities hone their apps to be responsive to their local populations.

"ISRG's pragmatic, public-interest approach to Internet security has fundamentally changed the web at an astonishing scale and pace,” Michael Brennan of the Ford Foundation said. "I believe their new projects have the same potential and I am eager to see what they turn their sights to next."

We’re grateful to Ford for their support of our efforts, and to all of you who have contributed time and resources to our projects. For more information on ISRG and our projects, take a read through our 2021 Annual Report. 100% of ISRG’s funding comes from contributed sources. If you or your organization are interested in helping advance our mission, consider becoming a sponsor, making a one-time contribution, or reaching out with your idea on how you can help financially support our mission at sponsor@abetterinternet.org.
