Mounting EFS from an EC2 instance in a different AWS account than the one that owns the file system is a common pattern in hub-and-spoke architectures, but it has gotchas. The EFS DNS name only resolves inside the owner's VPC, so cross-account mounts must use the mount target IP. The resource policy enforces TLS + org-scoped IAM, so plain mount -t nfs4 fails. You must use amazon-efs-utils with tls,iam. On RHEL 9, you'll likely need to pin to v1.36.0 because v2.0.0+ replaced stunnel with a Rust-based proxy that needs a newer rustc than RHEL 9's AppStream provides. And new EFS file systems need a one-time chown of the root directory because NFS root-squash blocks non-privileged writes by default. This post covers the full workflow: IAM setup, package install, cross-account mount, fstab persistence, and the admin-only POSIX bootstrap procedure.
The Setup
In this architecture, EFS is provisioned in a centralized shared services account, while the EC2 clients live in spoke accounts (production, non-production, etc.). All accounts are connected via Transit Gateway. The goal is to give application servers in spoke accounts a shared file system, in this case for Oracle binaries and patch staging.
| | Traditional On-Prem (NFS Appliance) | AWS (EFS) |
|---|---|---|
| Source | `nfs.internal:/oracle_share` | `fs-xxxxxxxxxxxxxxxxx:/` |
| Protocol | NFS v4.1 | NFS v4.1 |
| Authentication | LDAP / SSSD client-side | IAM (instance role) |
| Encryption in Transit | None / SMB-style | TLS (enforced by resource policy) |
| Capacity | Pre-provisioned | Elastic (8.0E logical) |
Why Cross-Account EFS Is Tricky
Same-account EFS mounts are easy. Install amazon-efs-utils, point at fs-xxxxx.efs.region.amazonaws.com, done. Cross-account introduces three friction points:
- DNS doesn't resolve. The EFS regional DNS name only resolves within the owner account's VPCs. Spoke accounts get NXDOMAIN.
- Plain NFS is blocked. A well-designed EFS resource policy denies non-TLS traffic and requires `aws:PrincipalOrgID`. Vanilla `mount -t nfs4` sends no IAM credentials, so the org condition can't match. Access denied.
- POSIX permissions need bootstrapping. A new EFS root is owned by `root:root` with mode `0755`. Without `ClientRootAccess` (which you should not grant in steady state), no non-root user can write. Mount succeeds, writes fail silently.
Each of these has a clean solution, and they compose well once you understand the order of operations.
Prerequisites
- Transit Gateway connectivity from spoke VPC to the EFS owner VPC
- Security group on EFS mount targets allowing TCP 2049 from the spoke CIDR ranges
- EFS resource policy granting `ClientMount` + `ClientWrite` to `aws:PrincipalOrgID` with TLS enforcement (a sketch of that policy follows this list)
- EC2 instance role with EFS client permissions (covered below)
- EFS root POSIX ownership set to your application user (one-time bootstrap, covered in the appendix)
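For reference, the resource policy shape behind that third bullet looks roughly like this. It is a sketch only: the file system ID and org ID are placeholders, and the command runs in the EFS owner account.

```bash
# Sketch: org-scoped allow + non-TLS deny on the EFS resource policy.
# fs-0123456789abcdef0 and o-exampleorgid are placeholders; run in the EFS owner account.
aws efs put-file-system-policy \
  --file-system-id fs-0123456789abcdef0 \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowOrgClientMountWrite",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": [
          "elasticfilesystem:ClientMount",
          "elasticfilesystem:ClientWrite"
        ],
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}}
      },
      {
        "Sid": "DenyNonTLS",
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Action": "*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}}
      }
    ]
  }'
```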
Step 1: IAM Permissions on the EC2 Instance Role
The instance role in the spoke account needs three EFS actions: ClientMount, ClientWrite, and DescribeMountTargets. An inline policy on the role is the simplest approach.
```json
{
  "Effect": "Allow",
  "Action": [
    "elasticfilesystem:ClientMount",
    "elasticfilesystem:ClientWrite",
    "elasticfilesystem:DescribeMountTargets"
  ],
  "Resource": "*"
}
```
I leave Resource: "*" while bootstrapping, then tighten to the specific EFS ARN once it's working. The wildcard is fine if your IAM perimeter is otherwise clean. The resource policy on the EFS itself is what actually enforces who can mount.
Note what's not in this list: ClientRootAccess. That's intentional. Only an admin needs it temporarily to chown the EFS root. Steady state runs on the three actions above.
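If you script the role setup rather than clicking through the console, the same statement can be attached with put-role-policy. A sketch, with hypothetical role and policy names:

```bash
# Wrap the statement above in a standard policy document and attach it inline.
# Role name "oracle-app-instance-role" and policy name "efs-client-access" are placeholders.
cat > efs-client-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:DescribeMountTargets"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws iam put-role-policy \
  --role-name oracle-app-instance-role \
  --policy-name efs-client-access \
  --policy-document file://efs-client-policy.json
```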
Step 2: Install amazon-efs-utils (and the v2.0.0 RHEL 9 Trap)
On Amazon Linux 2 / AL2023, this is one line:
```bash
sudo yum install -y amazon-efs-utils
```
On RHEL 9 or Oracle Linux 9, the package usually isn't in the default repos, so you build from source. This is where v2.0.0 will bite you.
In October 2024, AWS released amazon-efs-utils v2.0.0, which replaced stunnel with a new Rust-based efs-proxy component. Building from source on RHEL 9 then requires rustc newer than what AppStream ships (1.88.0+). The Rust proxy is a performance optimization; it is not required for the tls,iam mount workflow. v1.36.0 is the last pure-Python release, and AWS specifically tagged it as adding RHEL 9 support.
The build is straightforward:
```bash
sudo yum install -y git rpm-build make
git clone https://github.com/aws/efs-utils
cd efs-utils
git checkout v1.36.0
sudo make rpm
sudo yum install -y build/amazon-efs-utils-*rpm
```
If you want the v2.0.0 perf optimizations on RHEL 9, you'll need to install Rust via rustup first. That's a workable path, but I haven't found the trade-off worth it for typical Oracle workloads. Run the numbers for your IO patterns.
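If you do go that route, the sequence looks roughly like this. Treat it as a sketch, not a tested recipe, and check the efs-utils README for the current minimum toolchain version:

```bash
# Install a current Rust toolchain via rustup, then build a v2.x release.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. "$HOME/.cargo/env"
rustc --version                 # confirm it's newer than the AppStream rustc

cd efs-utils
git checkout v2.0.0             # or a later v2.x tag
make rpm                        # rpmbuild doesn't need root; plain sudo would lose rustup's PATH
sudo yum install -y build/amazon-efs-utils-*rpm
```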
Step 3: Mount the EFS (TLS + IAM)
Because the EFS DNS name doesn't resolve cross-account, you mount by mount target IP. Pick the IP that matches your EC2 instance's Availability Zone. Cross-AZ mount target traffic still works but you pay cross-AZ data transfer.
```bash
sudo mount -t efs \
  -o tls,iam,mounttargetip=10.0.3.115 \
  fs-xxxxxxxxxxxxxxxxx:/ /oracle/home
```
The three critical mount options:
| Option | What It Does |
|---|---|
| `tls` | Wraps the NFS connection in TLS via stunnel (or efs-proxy on v2.0.0+). Required because the resource policy denies non-TLS traffic. |
| `iam` | Signs the connection using the EC2 instance role's credentials. Required because the resource policy enforces `aws:PrincipalOrgID`. |
| `mounttargetip` | Bypasses DNS by pointing directly at the mount target ENI. Required for cross-account because the EFS DNS name only resolves in the owner's VPC. |
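To find the right mount target IP in the first place, the lookup typically has to run in the EFS owner account (or via a role you can assume there), since the spoke account can't describe a file system it doesn't own. A sketch with a placeholder file system ID:

```bash
# List mount targets with their AZ and IP; run with credentials in the EFS owner account.
aws efs describe-mount-targets \
  --file-system-id fs-0123456789abcdef0 \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,IP:IpAddress,State:LifeCycleState}' \
  --output table

# On the client, confirm which AZ this instance is in (IMDSv2).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/availability-zone
```

One caveat: AZ names are shuffled per account, so us-east-1a in the spoke account isn't necessarily the owner's us-east-1a. Compare AvailabilityZoneId values if you need to be certain.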
Hardcoding mount target IPs in fstab works but is fragile. If a mount target gets recreated, every fstab entry across every server breaks. A cleaner pattern is to create A records in your AD DNS (or Route 53 PHZ) like oracle-home-efs.internal.example.com pointing at the mount target IPs. Then every fstab references the DNS name, and recovery from a mount target change is a single DNS update.
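With Route 53, that pattern looks roughly like this (the hosted zone ID and record name are placeholders, and the private hosted zone has to be associated with every spoke VPC that mounts the share):

```bash
# UPSERT an A record in the private hosted zone pointing at the mount target IP.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "oracle-home-efs.internal.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "10.0.3.115"}]
      }
    }]
  }'
```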
Step 4: Verify the Mount
A few quick checks confirm the mount is working end-to-end:
```bash
# Mount is active and showing elastic capacity
df -h /oracle/home

# NFS version (should be 4.1)
nfsstat -m | grep oracle

# Write test as the application user (UID 201 in this example)
sudo -u oracle touch /oracle/home/test_file
ls -la /oracle/home/test_file
```
If the write fails with permission denied even though the mount succeeded, you've hit the POSIX bootstrap problem. Jump to the appendix.
Step 5: Make It Persistent (fstab)
For a reboot-safe mount:
```
fs-xxxxxxxxxxxxxxxxx:/ /oracle/home efs tls,iam,mounttargetip=10.0.3.115,_netdev 0 0
```
The _netdev flag is critical. It tells systemd not to attempt the mount until networking is up. Without it, you'll get boot-time mount failures that look mysterious until you remember it's a network mount.
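Before relying on a reboot to prove the entry, a quick dry run catches fstab typos immediately:

```bash
# Pick up the fstab change, then let mount -a drive the remount.
sudo systemctl daemon-reload
sudo umount /oracle/home
sudo mount -a
df -h /oracle/home        # should show the EFS mount again
```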
Troubleshooting Patterns
| Symptom | Likely Cause |
|---|---|
| Mount hangs / times out | Security group on mount target doesn't allow TCP 2049 from your spoke CIDR, or TGW route is missing |
| Permission denied on mount | Missing tls,iam options, or EFS resource policy excludes your principal |
| Mount succeeds but writes fail | EFS root is still root:root, needs the one-time chown bootstrap |
| TLS mount fails | amazon-efs-mount-watchdog not running, or stunnel missing |
| DNS resolution fails on cross-account | Expected, use mounttargetip or an internal DNS CNAME |
The mount logs at /var/log/amazon/efs/mount.log are surprisingly readable when something goes wrong. Check there before chasing deeper rabbit holes.
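A few first-pass checks that cover most of the table above (the mount target IP is a placeholder):

```bash
# Reachability: can we open TCP 2049 to the mount target? (security group / TGW route check)
timeout 5 bash -c 'exec 3<>/dev/tcp/10.0.3.115/2049' && echo "2049 reachable"

# TLS plumbing: the watchdog should be running once a tls mount exists.
systemctl status amazon-efs-mount-watchdog --no-pager

# The mount helper's own log, usually the fastest way to see what actually failed.
sudo tail -n 50 /var/log/amazon/efs/mount.log
```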
Design Decisions to Consider
Centralized vs. Per-Account EFS
This architecture centralizes EFS in a shared services account. The alternative is one EFS per account, which is simpler operationally but loses the "single source of truth" benefit and multiplies your KMS keys, replication targets, and backup configs. For shared application binaries (Oracle Home, patch staging, license files), centralized usually wins. For workload-specific data, per-account often makes more sense.
Mount Target IP vs. Internal DNS
Hardcoded IPs are fine for a handful of servers but turn into a tax at scale. Internal DNS records (AD or Route 53 PHZ associated with all spoke VPCs) decouple the mount config from the mount target lifecycle. Pick based on how often you expect to redeploy the EFS.
Resource Policy Strictness
The resource policy in this setup is fairly tight: org-scoped + TLS-required. You could go further with aws:SourceVpce conditions or principal ARN restrictions. You could also relax it for development; just don't carry the relaxed version into production unintentionally.
v1.36.0 Forever, or Plan for v2.0.0?
Pinning to v1.36.0 is a deliberate trade-off: simpler builds, no Rust toolchain dependency, slightly older codebase. AWS will keep adding features to v2.0.0+ and eventually you'll need to plan a migration. For Oracle DB workloads, the v2.0.0 perf gains haven't justified the complexity yet for me. Re-evaluate annually.
DR Replica Strategy
EFS supports cross-region replication out of the box (read-only replica). Decide upfront whether your spoke clients should be able to mount the replica directly during DR, or whether you'll promote it first. The mount workflow is identical; only the IP and FS-ID change.
Admin Appendix: One-Time EFS Root POSIX Bootstrap
This is the part that catches most people the first time: a fresh EFS root is owned by root:root mode 0755. Because steady-state IAM doesn't grant ClientRootAccess, NFS root-squashes the client's root user to anonymous, which can't write to a root:root 0755 directory. Non-root local users (like oracle UID 201) fall into "others" with r-x only.
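The symptom in practice (illustrative; the write fails even though the mount itself is healthy):

```bash
# Mount is fine, but a non-root user can't write into a root:root 0755 root directory.
ls -ld /oracle/home                             # drwxr-xr-x. root root ...
sudo -u oracle touch /oracle/home/test_file     # fails: Permission denied
```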
The fix is a temporary three-step elevation:
- Grant `ClientRootAccess` on the EFS resource policy and on an EC2 instance's IAM role
- Mount and chown the EFS root from that EC2 to your application owner (e.g., `oracle:dba`)
- Revoke `ClientRootAccess` from both the resource policy and the IAM role
Once chown'd, the new ownership persists across all future client mounts. Every future server just needs the standard three-action IAM policy. No admin elevation required.
The Elevation Dance, Briefly
Step A1: add elasticfilesystem:ClientRootAccess to the resource policy's allow statement (keep TLS deny intact):
"Action": [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite",
"elasticfilesystem:ClientRootAccess"
]
Step A2: add the same action to the IAM role's inline policy on whichever EC2 you're using to do the chown.
Step A3: mount, chown, verify, unmount:
```bash
sudo mkdir -p /tmp/efs-bootstrap
sudo mount -t efs -o tls,iam,mounttargetip=<ip> <fs-id>:/ /tmp/efs-bootstrap
sudo chown oracle:dba /tmp/efs-bootstrap
sudo chmod 755 /tmp/efs-bootstrap
sudo -u oracle touch /tmp/efs-bootstrap/test.txt && sudo -u oracle rm /tmp/efs-bootstrap/test.txt
sudo umount /tmp/efs-bootstrap
```
Steps A4 and A5: reverse the policy changes from A1 and A2, leaving only ClientMount, ClientWrite, and (on the IAM role) DescribeMountTargets.
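To confirm the elevation is fully reversed, check both sides (file system ID, role name, and policy name are placeholders):

```bash
# ClientRootAccess should be gone from the EFS resource policy...
aws efs describe-file-system-policy --file-system-id fs-0123456789abcdef0 \
  | grep -i ClientRootAccess || echo "not in resource policy"

# ...and from the instance role's inline policy.
aws iam get-role-policy --role-name oracle-app-instance-role --policy-name efs-client-access \
  | grep -i ClientRootAccess || echo "not in role policy"
```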
Whenever you do this for a new EFS, write the date and the resulting POSIX owner into your runbook. Future operators will reach for this appendix and immediately want to know whether their EFS has already been bootstrapped or if they need to run the elevation dance themselves.
Steady-State Reference
After bootstrap, the boring per-server setup is just:
- Standard three-action IAM inline policy on the EC2 role
- `amazon-efs-utils` v1.36.0 installed (RHEL 9) or yum-installed (AL2/AL2023)
- Mount point + `mount -t efs -o tls,iam,mounttargetip=<ip>`
- fstab line with `_netdev`
That's it. No more admin tickets, no more elevation, no more chown. Each new server can self-serve once the EFS has been initialized once.
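Put together, provisioning a new spoke server condenses to something like this. A sketch only; the file system ID, mount target IP, and mount point are placeholders carried over from the examples above:

```bash
#!/usr/bin/env bash
# Steady-state EFS client setup for a new spoke server (assumes efs-utils is already installed).
set -euo pipefail

FS_ID="fs-0123456789abcdef0"    # EFS in the shared services account
MT_IP="10.0.3.115"              # mount target IP in this instance's AZ
MOUNT_POINT="/oracle/home"

rpm -q amazon-efs-utils          # sanity check: mount helper present

sudo mkdir -p "$MOUNT_POINT"
echo "$FS_ID:/ $MOUNT_POINT efs tls,iam,mounttargetip=$MT_IP,_netdev 0 0" \
  | sudo tee -a /etc/fstab

sudo systemctl daemon-reload
sudo mount -a

# Verify as the application user; this only works once the POSIX bootstrap has been done.
sudo -u oracle touch "$MOUNT_POINT/.write_test" && sudo -u oracle rm "$MOUNT_POINT/.write_test"
```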