Well, we kicked off 2023 with a bang…AWS FEST is back, and with nearly 1700 registrants it was our most anticipated session yet. This time we wanted to focus on all aspects of Storage optimization. With cloud storage spend unexpectedly on the rise, a lot of data has been dug up and advice given on how to cut those costs. There are also lots of easy-to-implement FinOps tools popping up to identify and tackle storage inefficiencies. As information gets gathered on the ‘errors’ companies tend to make with their precious storage real estate, we knew it was important to focus on how these optimization tools and tricks can actually be implemented.
We hand-picked 5 amazing guests to present on how to optimize price, performance, analytics automation, DR, retention, Lifecycling, Kubernetes efficiency and loads more.
Live attendees got a bunch of surprises in the form of scratch games, swag pack wins, tons of new magic and laughs, and as they are wont to do, sent through over 9,000 emojis throughout the event, miraculously not breaking the Airmeet platform.
If you didn’t make it, or just want a review, we’ve summarized below some key concrete (and easy!) action plans for your AWS Storage optimization:
Highlights from AWS FEST: Storage Edition – March 2023
How to Reduce Costs with S3 Storage Lens with Steph Gooch
Steph is hands down a favorite recurring guest of ours. As Senior Commercial Architect at AWS, her focus is helping customers manage utilization, spend and improve their financial processes.
Steph had a lot to say about AWS free tool S3 Storage Lens, an analytics tool found in your S3 console, providing organizational-wide visuals of your storage usage.
What are some quick optimization wins using Amazon S3 Storage Lens, according to Steph?
- Using the Cost Optimization metric category, you can pull up a list of recommendations for various metrics like object count, active buckets, etc. Some examples of easy storage clean-up you can implement:
- Identify multiple variants of the same object that have been kept over time and are taking up storage space.
- When versioning is enabled (keeping multiple variants of an object in the same bucket), there could be a spike in non-current versions. You can drill down and see buckets causing excess spend.
- When uploading a single file, AWS breaks the file up and uploads it in pieces (common when Lambda functions upload to S3). If a multipart upload only partially succeeds, customers still pay for the parts that were uploaded.
- Use the high level bucket view to understand the contents of your buckets. Logs or queries can get unknowingly stored indefinitely, taking up expensive Standard S3 real estate. These can either be deleted or moved to colder storage such as Infrequent Access or Glacier.
- Steph recommends setting up a Lifecycle policy on those objects or buckets that you have identified as being redundant to delete altogether or be moved to colder storage tiers.
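The clean-up steps above can be sketched as a single S3 Lifecycle configuration. A minimal sketch, assuming a placeholder bucket name and illustrative day thresholds (tune both to your own retention needs):

```shell
# Sketch: a Lifecycle configuration covering two of the clean-up wins above --
# expiring old non-current versions and aborting incomplete multipart uploads.
# Day counts and the bucket name below are illustrative assumptions.
cat > lifecycle-cleanup.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    },
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

# Validate the JSON locally before applying it.
python3 -m json.tool lifecycle-cleanup.json > /dev/null && echo "lifecycle-cleanup.json OK"

# Apply to a bucket (requires AWS credentials; bucket name is a placeholder):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-example-bucket \
#   --lifecycle-configuration file://lifecycle-cleanup.json
```

The empty `Filter` applies the rules bucket-wide; you can scope them to a prefix instead.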
Here’s Steph’s full session where she gives many other great tips on how to reduce storage spend. She also delves into real-life success stories (like $600 annual savings), what to implement for paid metrics and how Amazon S3 Storage Lens can be used with other data.
Kubernetes Storage with Michael Levan
As an engineering consultant and AWS Community Builder in the Kubernetes space, Michael spends his time working with global enterprises helping them with their cloud-native projects. He was the perfect expert to explain how companies can optimize all aspects of their Kubernetes storage (choosing a storage class, attaching your persistent volume, etc.).
What quick Kubernetes Storage optimization tools can you implement today, according to Michael?
- When you spin up an EKS cluster in AWS there are no default storage classes – you set them up manually by enabling various CSI (container storage interface) add-ons.
- To get around the fact that this is not out-of-the-box within AWS, run a quick command to enable the EKS cluster plug-in, then specify your storage class.
- Once you have your persistent volume (a virtual hard drive) and the storage class it is provisioned from, you’ll need to attach the volume to a Kubernetes pod (roughly the equivalent of your computer claiming a hard drive when it’s plugged in). Because you aren’t running out of storage in the cloud (unlike an on-prem environment), you can always use a PersistentVolumeClaim (PVC) to claim the persistent volume and attach it to your pod.
- To use a PVC, Kubernetes users apply it to a Kubernetes pod after defining the PVC configuration. Kubernetes finds the storage and attaches it for that pod.
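The flow above can be sketched end to end: enable the EBS CSI add-on, define a StorageClass, then claim storage with a PVC and mount it in a pod. All names (cluster, class, claim, pod) are illustrative assumptions:

```shell
# 1. Enable the EBS CSI driver add-on (requires an EKS cluster and credentials):
# aws eks create-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver

# 2. A StorageClass backed by gp3 EBS volumes:
cat > storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
EOF

# 3. A PVC requesting 10Gi from that class, and a pod that mounts the claim:
cat > pvc-and-pod.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
EOF
echo "manifests written"

# Apply on a live cluster:
# kubectl apply -f storageclass.yaml -f pvc-and-pod.yaml
```

`WaitForFirstConsumer` delays volume creation until a pod actually needs it, which keeps the EBS volume in the same availability zone as the pod.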
Here’s Michael’s full session where he gives us many more optimization tips, such as connecting a MySQL container to a MySQL database running in RDS, using S3, EFS and EBS for EKS, more on stateful vs. stateless applications, and why both will require persistent storage at some point.
Optimizing Amazon S3 Storage Classes with Noa Israel
Noa is a Cost Optimization and Enablement Specialist at AWS and has a huge impact on AWS customer experience, in particular optimizing customers’ costs and account governance. She loves talking about her first-hand experience with customers and brings helpful real-life examples of how she helped her customers save big on their AWS bill using Data Lifecycle Management tools.
What customer storage patterns does Noa frequently see and how does she advise them on efficient tiering / tool usage?
- Because access patterns for our data change over time, the first step is to understand your workload requirements. In her presentation Noa gives a great run down of every single storage class option, their related retrieval charge tradeoffs and recommended class to use based on your frequency of access, latency and resiliency requirements.
- Noa gives us real life examples on how to implement automated Lifecycle policies to transition objects between storage classes based on object age.
- Use Storage Class Analysis, a powerful tool to help you make data-driven Lifecycle decisions. With it you can easily assess patterns and automatically classify your data as frequently or infrequently accessed.
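The age-based transitions Noa describes can be sketched as a Lifecycle rule. A minimal sketch, where the prefix, day counts and bucket name are illustrative assumptions informed by what Storage Class Analysis tells you about your access patterns:

```shell
# Sketch: transition objects under a prefix to Infrequent Access after 30 days
# and to Glacier after 90. Prefix, day counts and bucket are assumptions.
cat > lifecycle-tiering.json <<'EOF'
{
  "Rules": [
    {
      "ID": "tier-down-by-age",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF

# Validate locally before applying.
python3 -m json.tool lifecycle-tiering.json > /dev/null && echo "lifecycle-tiering.json OK"

# Apply (credentials required; bucket name is a placeholder):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket my-example-bucket \
#   --lifecycle-configuration file://lifecycle-tiering.json
```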
Be sure to watch Noa’s full session where she does a deep dive into S3 Intelligent Tiering and how it can save you up to 68% with new Archive Instant Access, the three options on how to move objects between storage classes, how customer-obsessed AWS actively implements feedback from their customers, and finally more real-life S3 optimization customer success stories.
AWS Storage Costs 101: Best Practices for Cost Optimization with Anthony Fiore
Anthony, another favorite recurring guest of ours, always brings his >25 years of industry experience and his wealth of knowledge on how AWS customers can efficiently plan, build, migrate and optimize their storage solutions using both AWS native storage services and AWS Partner solutions like N2WS.
What are some edge cases of storage cost optimization and what AWS native tools as well as partner solutions does Anthony recommend?
- AWS Compute Optimizer is a free, machine-learning-driven tool with recent enhancements that branch out from its original use, which was to help customers avoid overprovisioning by recommending which EC2 instance to use. Compute Optimizer now also helps you with EBS volume recommendations (gp3 vs io2 vs st1, etc).
- The EBS volume details page will provide a list of recommendations including IOPS settings specifications such as optimal size, monthly price difference and performance risk.
- Anthony often sees customers taking a final snapshot of a volume and thereby accumulating snapshot costs. For the use case where customers have many final snapshots that don’t need aggressive recovery times, he recommends N2WS AnySnap Archiver.
- This free tool (accessible via the N2WS Free Trial) provides instant cost savings: it takes an EBS snapshot (whether created manually or by an AWS service), reads the data off the snapshot, and automatically deposits it into an Amazon S3 bucket. You then have the choice to delete the original snapshot or keep it stored in two places.
- EFS Intelligent Tiering is a lesser known tool that Anthony highly recommends. He delves into how to create tiering rules, data transport charges as well as recent advances in driving down costs and latency for EFS.
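Compute Optimizer’s EBS recommendations can also be pulled from the CLI. A minimal sketch that degrades gracefully when the CLI or credentials are missing (it requires an AWS account with Compute Optimizer enabled; the query fields shown are one possible selection):

```shell
# Sketch: list Compute Optimizer's EBS volume recommendations.
# Requires credentials and Compute Optimizer enabled; falls back to a
# skip message otherwise.
if command -v aws >/dev/null 2>&1; then
  msg=$(aws compute-optimizer get-ebs-volume-recommendations \
          --query 'volumeRecommendations[].{Volume:volumeArn,Finding:finding}' \
          --output table 2>/dev/null) \
    || msg="aws call failed (no credentials or service not enabled) -- skipping"
  msg=${msg:-"no volume recommendations returned"}
else
  msg="aws CLI not installed -- skipping"
fi
echo "$msg"
```

The `finding` field flags each volume as optimized or not; the console’s EBS volume details page surfaces the same data alongside pricing and performance-risk estimates.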
Watch Anthony’s full session where he shares many more cost optimization conversations he has with customers, what his customers engage with for help with cost optimization as well as the hidden benefit of storage cost optimization.
Optimize your Storage and Cost for Disaster Recovery with Cynthia Santos
And finally, we simply had to bring on Cynthia, another recurring guest of ours as she is the go-to for all things optimization in Disaster Recovery. She talks with AWS customers daily showing them how to implement optimal multi-cloud DR solutions for both AWS and Azure.
Cynthia explains that your goal should be to back up your data often while still optimizing your storage costs. How do we do this?
- Use N2WS to get easy Lifecycle management by setting up specific retention periods with custom scheduling. Use one backup policy to Lifecycle all of your data. (N2WS can automatically store backups to ANY S3 or Glacier tier.)
- A typical use case Cynthia provided: daily backups kept as EBS, weekly backups copied to S3 Standard, and monthly backups archived to Glacier.
- Get a global view of your backups using N2WS — view all your backups across different accounts, resources and regions (great for MSPs). Cynthia recommends using Cost Explorer (get a real-time estimate of your backup costs) and Volume Usage Reports (track low/high EBS volume usage, receive alerts for over- or under-provisioned volumes) for making transparent, data-driven decisions.
- Utilize N2WS’ flexible recovery options to recover files, folders, instances, volumes or entire databases or file systems. Use versioning to easily select the version of the file you need — no need to provision an entire instance or recreate an entire file system.
- N2WS feature VPC Capture & Clone allows you to restore effectively through connectivity — you must be able to recreate how your services connect in order to access your data.
Cynthia had many other great tips for backing up frequently with minimal RTO and maximum cost savings such as how to store directly to Glacier without having to store temporarily in S3, the easiest way to archive RDS to Glacier and why Glacier Instant Retrieval is a win-win for most backup processes. Definitely a must-watch session.
Our mentalist/comedian emcee Gidi Givneh revealing just one of many ‘surprises’ throughout AWS FEST to Anthony Fiore and host Jon Myer
There you have it. Our top highlights from AWS FEST: Storage Edition. We absolutely love getting the AWS community together to share valuable, timely knowledge and are already planning our next AWS FEST on May 3rd which will focus on Compute!
New to N2WS? You can start backing up your AWS and Azure environments within minutes, absolutely free, with the N2WS Backup & Recovery 30-day Free Trial. Get access to enterprise features for 30 days, including support for Amazon EC2, EBS, RDS, Redshift, Aurora, EFS, SAP HANA and DynamoDB. The trial automatically converts to our Free Edition and takes <14 minutes to launch, configure and back up. No credit card needed.