Why I Started Using Postgres (And You Might Too)


Back in 2017, I started designing a new application.

At that point, almost all of my database experience focused on Microsoft SQL Server and its variations – Azure SQL DB, Amazon RDS SQL Server, and the like. I’d made a really good living doing consulting and training on that platform, and I felt like I knew it inside out.

However, that meant I also knew its challenges:

  • Microsoft SQL Server was expensive – $2,000 per CPU core for Standard Edition, which maxed out at 128GB RAM, and $7,000 per core for Enterprise Edition. Licensing a typical database server cost tens of thousands of dollars just for the software alone.
  • Microsoft wasn’t interested in cutting prices – in fact, just the opposite. They seemed to have their sights set on acquiring Oracle’s market share, so every new SQL Server release brought out new Oracle-ish features that were only available in the expensive Enterprise Edition. Meanwhile, open source databases kept adding more features for free.
  • Microsoft DBAs and support are expensive, too – if you needed to hire a Microsoft SQL Server database administrator, you couldn’t just hire somebody out of college. You’d need to compete with the same talent pool that big enterprises wanted to hire. Microsoft support quality kept getting worse, and I could get better support for free from the internet and from my peer network.
  • Most apps don’t need SQL Server’s capabilities – sure, it had a lot of enterprise-y stuff like transparent data encryption at the storage layer, reporting and analytics software, and ETL tooling, but most apps just need a place to store data. In fact, when I worked with my Microsoft clients, I encouraged them to think of the database as just a filing cabinet, nothing more or less, and to avoid using any platform-specific features.
  • The cloud replaces a lot of Enterprise Edition’s HA/DR capabilities – if you needed high availability and disaster recovery scattered across multiple data centers, plus the ability to read from those replicas, AWS could do that for you, even a decade ago. You didn’t have to pay licensing fees for each replica.

So when it came time to think about what database to use for my Software-as-a-Service, I picked Postgres – specifically, AWS Aurora Postgres.

There was one scary part: what would happen if we hit performance problems? At the time, I didn’t know jack about Postgres. How would I be able to solve those performance problems without throwing a ton of hardware at it and running our company bank accounts dry?

I gambled that I'd be able to port my Microsoft performance tuning skills over to Postgres quickly enough to head off issues.

There were some scary times pretty early on. The app exploded in popularity, and in no time, we were adding 2TB of data every month. Some of our performance problems were instantly familiar, recognizable, and within my wheelhouse, even though it was an entirely new (to me) database platform. Other problems were shocking and challenging, like issues we hit with Postgres’s mysterious vacuum processes.
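As a taste of what made vacuum less mysterious for me: Postgres tracks dead (deleted-but-not-yet-vacuumed) rows per table in the `pg_stat_user_tables` view, and watching those counters is one way to spot tables autovacuum is falling behind on. Here's a minimal sketch of that kind of monitoring query – the catalog columns (`n_live_tup`, `n_dead_tup`, `last_autovacuum`) are real, but the helper function and thresholds are just illustrative, not something from a specific tool:

```python
# Sketch: monitor dead tuples so autovacuum problems don't sneak up on you.
# pg_stat_user_tables and its columns are real Postgres catalog objects;
# how you connect and what ratio you alert on are up to you.
DEAD_TUPLE_QUERY = """
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
"""

def dead_tuple_ratio(live: int, dead: int) -> float:
    """Fraction of a table's rows that are dead and awaiting vacuum.

    A persistently high ratio on a big table is a hint that autovacuum
    isn't keeping up (or its settings need tuning for that table).
    """
    total = live + dead
    return dead / total if total else 0.0
```

You'd run `DEAD_TUPLE_QUERY` through whatever driver you use (psycopg, for example) and alert when `dead_tuple_ratio` stays high on your largest tables.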

I want to help you avoid those kinds of performance problems in your apps, so I’ve distilled the most important lessons down into my Fundamentals of Postgres Index Tuning and Fundamentals of Vacuum classes. If you find yourself doing Postgres work, check out my Fundamentals of Performance bundle, which also includes the Fundamentals of Python class.

I’m excited to help you conquer Postgres performance issues quickly!


