Encryption: At Rest & In Transit
Session 4.6 · ~5 min read
Two Threats, Two Solutions
Data faces two distinct categories of risk depending on where it sits at any given moment. When stored on disk, it is vulnerable to physical theft, unauthorized filesystem access, or a compromised backup. When moving between services over a network, it is vulnerable to interception, modification, or replay. These are fundamentally different attack surfaces, and they require fundamentally different protections.
Encryption at rest protects against physical theft. Encryption in transit protects against eavesdropping. You need both.
Skipping either one creates a gap. Encrypt data in transit but store it in plaintext, and a stolen hard drive exposes everything. Encrypt data at rest but transmit it over HTTP, and anyone on the network path can read it. Security is not a menu where you pick one item. It is a chain, and it breaks at the weakest link.
Encryption at Rest: Three Approaches
Cloud providers, AWS in particular, offer multiple server-side encryption (SSE) options for object storage. Each shifts the boundary of who manages the keys and who performs the encryption. The choice depends on your compliance requirements, operational complexity tolerance, and trust model.
SSE-S3: AWS-Managed Keys
SSE-S3 is the default. AWS generates, manages, and rotates encryption keys on your behalf. Each object is encrypted with a unique key, and that key is itself encrypted with a root key that AWS rotates regularly. You never see or touch any key material. The encryption and decryption happen transparently on S3's side.
This is the simplest option. No configuration beyond enabling it (which is now on by default for all new buckets). No additional cost. The trade-off: you have no control over key policies, no audit trail of key usage, and no ability to revoke access at the key level.
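Turning SSE-S3 on as a bucket default is a single API call. A minimal sketch of the configuration, as accepted by S3's PutBucketEncryption API; the bucket name is a hypothetical placeholder, and the boto3 call itself is left in a comment so the sketch runs without AWS credentials:

```python
# Default-encryption configuration for a bucket. "AES256" selects
# SSE-S3, i.e. AWS-managed keys.
sse_s3_config = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    ]
}

# With boto3 (requires AWS credentials):
# boto3.client("s3").put_bucket_encryption(
#     Bucket="example-bucket",  # hypothetical bucket name
#     ServerSideEncryptionConfiguration=sse_s3_config,
# )

rule = sse_s3_config["Rules"][0]["ApplyServerSideEncryptionByDefault"]
print(rule["SSEAlgorithm"])  # AES256
```

Since SSE-S3 is now the default for new buckets, this call mostly matters for older buckets created before the default changed.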
SSE-KMS: AWS KMS-Managed Keys
SSE-KMS uses AWS Key Management Service to manage encryption keys. You can use an AWS-managed KMS key or create your own customer-managed key (CMK). The critical difference from SSE-S3 is visibility and control. Every use of the key is logged in CloudTrail. You can define key policies that restrict which IAM principals can encrypt or decrypt. You can disable a key or schedule its deletion; once a key is deleted, the data it protected is permanently inaccessible.
This matters for compliance. PCI-DSS, HIPAA, and SOC 2 auditors want to see who accessed encryption keys and when. SSE-KMS provides that audit trail. The cost: KMS API calls are not free, and high-throughput workloads can hit request limits.
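At the request level, the difference between SSE-S3 and SSE-KMS is a header or two on each PutObject call. A sketch of the two parameter sets side by side; the bucket, object key, and KMS key ARN are hypothetical placeholders:

```python
# Per-object encryption parameters for the S3 PutObject API.
sse_s3_put = {
    "Bucket": "example-bucket",            # hypothetical
    "Key": "invoices/2024-01.csv",
    "Body": b"...",
    "ServerSideEncryption": "AES256",      # SSE-S3
}

sse_kms_put = {
    "Bucket": "example-bucket",            # hypothetical
    "Key": "invoices/2024-01.csv",
    "Body": b"...",
    "ServerSideEncryption": "aws:kms",     # SSE-KMS
    # Hypothetical customer-managed key (CMK):
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
}

# boto3.client("s3").put_object(**sse_kms_put)  # requires AWS credentials
```

With the second form, every download of that object triggers a KMS decrypt operation, which is exactly what shows up in CloudTrail and what auditors ask for.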
Client-Side Encryption
With client-side encryption (CSE), you encrypt data before it ever reaches the cloud provider. AWS never sees plaintext data. The encryption and decryption logic lives in your application, using either the AWS Encryption SDK or your own implementation. You can still use KMS to manage the data encryption keys (CSE-KMS), or you can manage keys entirely on your own infrastructure.
This is the highest-trust model. Even a fully compromised AWS account cannot read your data without the encryption keys. It is also the most complex. Your application must handle encryption, key rotation, and the inevitable key management failures. If you lose the keys, the data is gone. No recovery.
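The core pattern is easy to demonstrate locally. A minimal sketch using the third-party `cryptography` library's Fernet construction rather than the AWS Encryption SDK; the point is that the storage provider only ever receives ciphertext, and that a lost key means unrecoverable data:

```python
from cryptography.fernet import Fernet, InvalidToken

# Key generated and held by the client; AWS (or any storage provider)
# would only ever see the ciphertext below.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"4111 1111 1111 1111")  # this is what you upload
assert b"4111" not in ciphertext                # storage never sees plaintext

plaintext = f.decrypt(ciphertext)               # only possible with the key

# Lose the key and the data is gone: a freshly generated key cannot
# decrypt old ciphertext.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("unrecoverable without the original key")
```

A production system would add envelope encryption (a per-object data key wrapped by a master key, as CSE-KMS does), key rotation, and careful key escrow, which is where the complexity lives.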
Comparison
| Dimension | SSE-S3 | SSE-KMS | Client-Side (CSE) |
|---|---|---|---|
| Who encrypts | AWS (S3) | AWS (S3 + KMS) | Your application |
| Who manages keys | AWS (fully managed) | AWS KMS (you control policy) | You (or KMS for wrapping) |
| Key audit trail | No | Yes (CloudTrail) | You must implement |
| Granular key policy | No | Yes (IAM + key policy) | Yes (your responsibility) |
| Performance impact | Negligible | KMS API latency + rate limits | CPU cost in your app |
| Cost | Free | $1/key/month + API calls | Your compute + optional KMS |
| Compliance fit | Basic | PCI, HIPAA, SOC 2 | Maximum (data never exposed) |
| Complexity | Zero | Low | High |
Encryption in Transit: TLS
Transport Layer Security (TLS) is the protocol that makes HTTPS work. It provides three guarantees: confidentiality (nobody can read the data), integrity (nobody can modify the data without detection), and authentication (you are talking to who you think you are talking to).
TLS 1.3, defined in RFC 8446, is the current standard. It is faster and more secure than TLS 1.2. The handshake completes in one round trip instead of two. Older, insecure cipher suites are removed entirely. There is no negotiation of weak algorithms because the protocol simply does not offer them.
TLS 1.3 Handshake
The handshake establishes a shared secret between client and server without ever transmitting that secret over the wire. It uses Diffie-Hellman key exchange, where both parties contribute a public value and independently compute the same shared key.
The key improvement over TLS 1.2: the client sends its key share in the first message, before knowing which parameters the server will choose. This eliminates one full round trip. For a client 100ms away from the server, that saves 100ms on every new connection. At scale, across millions of connections, this adds up.
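The "both parties contribute a public value and independently compute the same key" step can be shown with toy finite-field Diffie-Hellman. This is an illustration only: real TLS 1.3 uses standardized groups or elliptic curves such as X25519, and the parameters here are far too small for actual security:

```python
import secrets

# Toy finite-field Diffie-Hellman. The modulus is 2**127 - 1 (a Mersenne
# prime); real deployments use RFC 7919 groups or elliptic curves.
p = 2**127 - 1
g = 5

a = secrets.randbelow(p - 2) + 2        # client's private value
b = secrets.randbelow(p - 2) + 2        # server's private value

A = pow(g, a, p)                        # client's public key share
B = pow(g, b, p)                        # server's public key share

# Each side combines its own private value with the other's public share.
client_secret = pow(B, a, p)            # B^a = g^(b*a) mod p
server_secret = pow(A, b, p)            # A^b = g^(a*b) mod p

assert client_secret == server_secret   # same key, never sent on the wire
```

In TLS 1.3, the client's share `A` rides along in the ClientHello's key_share extension, which is precisely the optimistic first-message send that saves the round trip.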
TLS 1.3 also supports 0-RTT resumption, where a client that has previously connected can send encrypted application data in its very first message. This is useful for performance but introduces replay risk, so it should only be used for idempotent requests.
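Refusing older protocol versions is a one-line policy in most TLS stacks. A sketch using Python's standard-library `ssl` module; note that the default client context already enforces the authentication guarantee (hostname checking plus certificate verification):

```python
import ssl

# Client-side context that refuses anything older than TLS 1.3.
# A server can apply the same minimum_version to its own context.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# create_default_context() already provides the authentication guarantee:
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED

print(ctx.minimum_version)  # TLSVersion.TLSv1_3
```

Pinning the minimum version means the connection fails loudly against a legacy endpoint instead of silently negotiating down to TLS 1.2.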
mTLS: Mutual Authentication
Standard TLS authenticates only the server. The client verifies the server's certificate, but the server accepts any client. This is fine for public websites, where you want anyone to connect.
In a microservices architecture, you want more. Service A should only accept requests from Service B, not from a compromised pod or a rogue container. Mutual TLS (mTLS) solves this by requiring both sides to present and verify certificates.
In practice, mTLS is usually implemented through a service mesh like Istio or Linkerd. The mesh's sidecar proxies handle certificate issuance, rotation, and verification automatically. Application code does not need to change. The mesh infrastructure guarantees that every service-to-service call is both encrypted and authenticated.
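For services that terminate TLS themselves rather than behind a mesh sidecar, requiring client certificates is a server-side context setting. A sketch with Python's `ssl` module; the certificate file paths are hypothetical, so those calls are left as comments:

```python
import ssl

# Server-side context for mutual TLS.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

# Hypothetical file paths; in a service mesh the sidecar manages these:
# ctx.load_cert_chain("server.crt", "server.key")  # this service's identity
# ctx.load_verify_locations("internal-ca.pem")     # CA that signs client certs

print(ctx.verify_mode)  # VerifyMode.CERT_REQUIRED
```

With `CERT_REQUIRED` set, the handshake itself fails for any peer that cannot present a certificate signed by the trusted internal CA, so unauthenticated traffic never reaches application code.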
Certificate Management
Certificates expire. If you do not rotate them before expiration, your services go down. This is not a theoretical risk: expired certificates caused the 2018 Ericsson incident that knocked out mobile data for millions of subscribers, and the February 2020 Microsoft Teams outage.
Automated certificate management is essential. Tools like Let's Encrypt (for public-facing certificates) and HashiCorp Vault or AWS Certificate Manager (for internal certificates) handle issuance and renewal without human intervention. The goal: no human should ever need to remember to renew a certificate.
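Even with automated renewal in place, monitoring expiry is a cheap safety net. A small sketch using only the standard library; `not_after` is the expiry string in the format returned by `ssl.SSLSocket.getpeercert()`, and the 30-day threshold mirrors typical renewal windows:

```python
import ssl
import time

def days_until_expiry(not_after: str) -> float:
    """Days until a certificate expires, given its notAfter field
    (format: 'Jan  1 00:00:00 2030 GMT')."""
    expires = ssl.cert_time_to_seconds(not_after)  # epoch seconds, UTC
    return (expires - time.time()) / 86400

remaining = days_until_expiry("Jan  1 00:00:00 2030 GMT")
print(f"{remaining:.0f} days left")
if remaining < 30:
    print("renew now")  # in practice: page someone / trigger renewal
```

A monitoring job would feed this the actual peer certificate from each endpoint rather than a hard-coded date, and alert well before the renewal tool's own deadline.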
Performance Implications
Encryption is not free. Every encrypted operation costs CPU cycles. But the costs are often smaller than people assume.
AES-256 encryption on modern hardware with AES-NI instructions adds roughly 1-2% CPU overhead for bulk data encryption. The bottleneck is rarely the symmetric encryption itself. It is the key exchange (asymmetric operations) during TLS handshake, the KMS API calls for SSE-KMS, and the additional network round trips.
For most systems, the right approach is to encrypt everything and optimize only where measurements show a real bottleneck. The alternative, leaving data unencrypted "for performance," creates a security debt that compounds over time and eventually comes due in the form of a breach.
Further Reading
- RFC 8446: The Transport Layer Security (TLS) Protocol Version 1.3 (IETF)
- Protecting Data with Server-Side Encryption (AWS Documentation)
- Understanding Amazon S3 Client-Side Encryption Options (AWS Storage Blog)
- A Detailed Look at RFC 8446 (TLS 1.3) (Cloudflare Blog)
- Mutual TLS: Securing Microservices in Service Mesh (The New Stack)
Assignment
You are designing a payment processing system. It stores credit card numbers in a database and transmits them between an API gateway, a payment service, and a card processor.
- What encryption-at-rest strategy do you choose for the stored card numbers? Why not SSE-S3?
- What encryption-in-transit strategy do you use between the API gateway and payment service? Between the payment service and the external card processor?
- Who holds the encryption keys? Draw a diagram showing key ownership at each layer.
- PCI-DSS requires that you can prove who accessed cardholder data and when. Which encryption option gives you that audit trail?