I didn’t expect my time at eGain to end with a “thanks for all your hard work, but due to structural changes we’re canning the entire UK engineering team,” but here we are.

I missed SQLBits last year. I’d been asked to relocate to California, visa approved and everything, only for it to stall indefinitely thanks to some classic corporate dithering. This year, with job security firmly in the ‘Uncertain’ column, I decided to keep things simple and stick to the free Saturday sessions.

But even the free day at SQLBits is never a wasted one. I’ve been eight times now, and every single one has been worth it. The sessions are always packed with insight, the crowd knows their stuff, and there’s something about swapping war stories in the hallway that feels a bit like therapy but with more lanyards.

This year also gave me the chance to catch up with Gautham Kamath, my partner in data-driven crime for 14 years. We spent the better part of a decade and a half architecting systems, chasing down edge cases, and occasionally breaking things just to see how they worked. If you’re building a team and want someone who combines technical depth with actual delivery (and a moustache that commands respect), you need to talk to him. Few people know how to navigate chaos and still ship clean code like Gautham.

I turned up expecting a solid day of sessions. What I got was a roadmap for SQL Server’s next chapter, a peek behind the Fabric curtain, and an entirely unexpected curiosity for PostgreSQL. Whether you’re still building SSIS packages or you’ve gone full lakehouse, SQLBits 2025 had something to say.

Talks I Attended at SQLBits 2025

With a free ticket and a recently cleared calendar, I figured I may as well attend everything I could squeeze in. Here’s what made it onto my schedule:

  • SQL Server 2025 Engine Deep Dive – Bob Ward & Erin Stellato
  • SQL Server 2025: Unleashing Next-Level Database Performance – Margarita Naumova
  • Storytelling & Power BI: Creating Power BI Reports that Connect with Different Audiences – Valerie Junk
  • Transform Your Business with Integrated Solutions Using SQL Database in Microsoft Fabric – Mark Pryce-Maher
  • Automating Star Schemas in Fabric Data Warehouse – Bob Duffy
  • Best Practices for Building a Data Warehouse in Microsoft Fabric – Mark Pryce-Maher
  • Learning PostgreSQL as a SQL Server User – Grant Fritchey

There was a fair bit of crossover, especially in the Fabric and SQL Server sessions, but each one brought something new to the table. Because of the crossover, I'm not going to discuss each session individually; instead, I'll talk about what I learnt.

Concurrency Without Pain: SQL Server 2025’s Optimised Locking

I’ve lost count of the times I’ve seen a session titled something like “what’s new in SQL Server,” only to find out that most of the “new” bits are just things Azure’s had for 18 months. This wasn’t that.

With Margarita Naumova, then Bob Ward and Erin Stellato, we got a deep dive into what SQL Server 2025 is actually doing under the hood, and a lot of it is geared towards one of the oldest problems in the game: locking.

Optimised Locking: End of Lock Escalation?

The headline feature here is Optimised Locking, a combination of engine enhancements that effectively removes the need for lock escalation in many cases. Bob demoed a familiar scenario: update 2,500 rows, and you get key locks. Update 10,000, and the engine escalates to a table lock, blocking everyone else. Classic.

But with Optimised Locking enabled (alongside Accelerated Database Recovery), those same updates hold a new kind of transaction-level lock: a single Transaction ID (TID) lock that acts as a logical wrapper around the changes and, crucially, doesn't conflict with other operations targeting different rows. So instead of escalating to a table-level lock, SQL Server now… doesn't panic. And other users don't get blocked for no good reason.

Bob’s demo showed an update of 10,000 rows running without blocking a SELECT MAX(id) from a completely different part of the table. If you’ve ever been on call during a bulk update and wondered why a single read query caused a five-minute meltdown, this should make your ears perk up.
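If you fancy recreating the effect yourself, the shape of the repro is easy to sketch. The table and column names below are mine, not Bob's, and I'm assuming Optimised Locking (plus RCSI) is already switched on:

```sql
-- Session 1: a bulk update left open mid-transaction
BEGIN TRANSACTION;
UPDATE dbo.Orders
SET    Status = 'Archived'
WHERE  OrderDate < '2020-01-01';   -- ~10,000 rows: enough to trigger escalation pre-2025

-- Session 2 (run while session 1 is still open):
-- with Optimised Locking and RCSI on, this returns immediately
-- instead of queueing behind a table lock
SELECT MAX(OrderId) FROM dbo.Orders;
```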

But There’s a Catch (Of Course)

Optimised Locking isn't magic. It requires ADR (Accelerated Database Recovery) to be enabled first, and it works even better when paired with Read Committed Snapshot Isolation (RCSI), though that part's optional. ADR isn't new (it shipped with SQL Server 2019), but a lot of shops still haven't flipped that switch. I wanted to switch this on at eGain, but after some testing the DBA team wouldn't allow it. Depending on your workload, enabling ADR isn't always painless. You'll want to test thoroughly. Bob and Erin stressed that more than once.
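For reference, the knobs involved are all database-scoped options. A minimal sketch, assuming a database called MyDatabase, and noting the order matters (ADR has to go first):

```sql
-- ADR is the prerequisite; Optimised Locking won't enable without it
ALTER DATABASE [MyDatabase] SET ACCELERATED_DATABASE_RECOVERY = ON;

-- The headline feature itself
ALTER DATABASE [MyDatabase] SET OPTIMIZED_LOCKING = ON;

-- Optional, but recommended in the session for the full benefit
ALTER DATABASE [MyDatabase] SET READ_COMMITTED_SNAPSHOT = ON;

-- Confirm it took (returns 1 when enabled)
SELECT DATABASEPROPERTYEX(N'MyDatabase', 'IsOptimizedLockingOn');
```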

Margarita also made the point that while these features came from Azure, we don’t get Microsoft holding our hand on-prem. You enable it, you maintain it. No hidden ops team swooping in when your lock manager gets grumpy.

But if you do turn it on? The payoff looks real.

What It Means for You (and Me)

For developers, it may mean the end of countless NOLOCK hints (often a disaster waiting to happen) added just to avoid contention. For DBAs, it means fewer deadlocks to troubleshoot and less pressure to pre-emptively rewrite update logic.

For me? I’m genuinely tempted to spin up a test DB with ADR + Optimised Locking and see what happens to one of my data transformation sprocs. The kind of sproc that used to be described as “aggressively single-threaded” by someone I still owe a beer.

This feature doesn’t feel like fluff. It feels like Microsoft finally addressing one of SQL Server’s fundamental pain points. Not with yet another setting buried in sys.configurations, but with a change to the engine’s behaviour that makes everyone’s life better.

Finally Doing Something About tempdb

There’s a short list of things that make most SQL Server professionals sigh audibly when mentioned, and somewhere between “merge replication” and “sp_configure show advanced options” sits tempdb.

Microsoft have been quietly chipping away at tempdb pain points for a while now (multiple files by default, trace flags to stop allocation contention, etc.) but SQL Server 2025 takes a proper swing at the problem with tempdb resource governance.

A Governor for tempdb (No, Not Resource Governor)

It rides on the same workload-group plumbing as Resource Governor, but the governance itself is new. SQL Server 2025 lets you define policies at the session level to control tempdb usage (by size, by percentage, or by workload group).

And unlike in the past, this actually applies to all tempdb usage: user objects, internal objects, and the version store. The whole lot.

Bob explained that if you’ve got rogue queries using temp tables like they’re going out of fashion (and let’s be honest, you probably do), you can now fence them off from ruining things for everyone else.

It’s not quite a sandbox, but it’s a solid step forward, and critically, it gives you the option to monitor and enforce limits before tempdb runs out of space and takes half the business with it.

Policies, Not Panic

You define tempdb usage policies with CREATE WORKLOAD GROUP, and tie them to a classifier function (just like Resource Governor). But these are dedicated to tempdb activity.
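Pieced together from the demo, the shape of it looks something like this. Treat it as a sketch: the group and function names are mine, and the exact option syntax may shift before release:

```sql
-- Cap a group of sessions at roughly 10 GB of tempdb data
CREATE WORKLOAD GROUP ReportingGroup
    WITH (GROUP_MAX_TEMPDB_DATA_MB = 10240);
GO

-- Classifier routes incoming sessions into the group
-- (created in master, schema-bound, same pattern as Resource Governor)
CREATE FUNCTION dbo.fn_tempdb_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE
               WHEN SUSER_SNAME() = N'reporting_svc' THEN N'ReportingGroup'
               ELSE N'default'
           END;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_tempdb_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```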

There’s also a new DMV to see who’s using what (sys.dm_db_tempdb_resource_stats), which might finally give DBAs an answer when asked “what’s actually filling up tempdb?”
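I haven't verified the column list, so the honest first look is a lazy one (DMV name as given in the session):

```sql
-- Who's consuming tempdb right now, per the new DMV
SELECT * FROM sys.dm_db_tempdb_resource_stats;
```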

Bob’s demo showed a real example of a query being automatically terminated for exceeding its tempdb budget, without impacting the rest of the workload. No server-wide slowdown, no out-of-space errors, no drama.

What It Means for You (and Me)

If you’ve ever had to deal with a runaway report chewing through tempdb during month-end, this could be a game-changer. It’s one of those features that’ll quietly save your weekend, without anyone ever knowing it existed.

For me, I'm already thinking about where this would've slotted into our eGain workloads, especially the ones where large reporting queries spilled to tempdb due to out-of-control customisations made for specific customers.

This doesn’t solve every problem with tempdb. But it finally gives us a way to control it without resorting to dark magic or nightly restarts.

Reports People Actually Use: Power BI Storytelling

If there's one universal truth in data, it's that nobody wants to open your report unless they have to. You can build the slickest visuals, add hover-over tooltips, layer on smart drillthroughs, but if it doesn't speak to the person using it, then it's getting closed faster than you can say "click to expand."

Valerie Junk’s session tackled this problem head-on, and instead of obsessing over DAX or bookmarks, she focused on something far more valuable: clarity.

Tell a Story, Don’t Just Paste a Chart

The pitch wasn’t that we all need to become design wizards. It’s that we need to build reports with audience, purpose, and action in mind. What question does this report answer? What decision does it help make? Who’s supposed to use it, and what do they already know?

If you’ve ever designed a dashboard that was “for everyone” and ended up pleasing no one, this probably stung a bit. (It did for me.)

The idea of data storytelling sometimes gets written off as marketing fluff, but Valerie framed it as structure: use storytelling techniques to organise data, highlight relevance, and guide users. In other words, don’t rely on the user to figure it out, make it obvious.

“Surprise, It’s a Drillthrough” is Not Good UX

One of my favourite lines from her talk. A lot of Power BI reports hide functionality behind obscure clicks or icons, and the moment someone says “oh, you just right-click here and then click there,” you’ve already lost 90% of your users.

Instead, she advocated for intuitive design, where interactivity is clearly signposted and nothing vital is hidden behind three layers of right-clicks. Treat the report like a user journey, not a data dump.

Positive Engagement Beats Forced Usage

Valerie also made the point that we should stop measuring success by “number of views.” A report that people are forced to open isn’t a success. A report they choose to use because it saves them time or helps them do their job better, that’s the goal.

The call to action here is clear: if you want your reports to be more than just pretty charts, design with empathy. Think like the user. Start with the decision, not the data.

What It Means for You (and Me)

It made me want to revisit a few of my own reports, especially the ones I built “because someone asked for the data” and not because anyone actually needed the insight.

There’s a real difference between building for show and building for utility. And as someone who’s spent years refining technical solutions, it’s easy to forget that the last mile (the bit where someone actually interacts with what you built) is where the battle is won or lost.

Fabric Reality Check: Simplicity, Scale and the Stuff That’s Still Missing

Fabric is slick. Fabric is fast. Fabric is… not SQL Server.

Between Mark Pryce-Maher’s sessions and Bob Duffy’s walkthroughs, one message was clear: Microsoft Fabric is a leap forward, but it’s not magic and it’s definitely not a drop-in replacement for your existing SQL Server stack.

What Fabric SQL Is (and What It Really Isn’t)

Fabric SQL might feel familiar at first glance. There’s a warehouse, you write SQL, results come back. So far, so good.

But scratch the surface and you start to notice the differences. Fabric doesn't have MDFs or physical tables in the way we're used to; it's Delta Lake under the hood. It's columnar, distributed, and built for analytics at scale. Great for reading large datasets. Less great if you're hoping for tight transactional control.

There are no identity columns. No constraints. No temp tables. Some of that’s by design, some of it’s just not built yet.
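In practice, that means some muscle memory has to go. A hedged sketch of the kind of workaround that comes up, with table names invented and the usual caveat that teams solve this differently:

```sql
-- No IDENTITY in a Fabric warehouse, so surrogate keys are assigned at load time
INSERT INTO dbo.DimCustomer (CustomerKey, CustomerName)
SELECT
    ROW_NUMBER() OVER (ORDER BY s.CustomerName)
        + COALESCE((SELECT MAX(CustomerKey) FROM dbo.DimCustomer), 0) AS CustomerKey,
    s.CustomerName
FROM staging.Customer AS s;
```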

Bob Duffy summed it up neatly: “Fabric is for writing analytics systems, not OLTP systems.” If you’re used to building monolithic stored procedures that transform data, apply logic, and serve it all in one pass, Fabric’s going to force a rethink.

The Speed and Simplicity Are Real

What you get in return is scale, simplicity, and integration. Creating a new warehouse in Fabric takes seconds. HA and DR are built in. Auto-scaling works. The Power BI integration is native. And if you stay within the lines, the performance is impressive.

Mark showed how you can stitch together pipelines with very little infrastructure overhead, so long as your logic lives in pipelines, notebooks, or external code. You don’t “live” in Fabric SQL the way you might in traditional SQL Server. It’s more of a destination for shaped data.

The Star Schema Still Matters

Bob Duffy focused on star schema automation. The tooling can help you with ingestion patterns, metadata capture, and automation, but business logic still needs a human. And probably a spreadsheet.

The idea of drag-and-drop data warehousing is appealing, but in practice, Fabric still wants you to understand your model. The automation makes life easier, but it’s not going to invent relationships or write SCD logic for you.
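For the uninitiated, "SCD logic" is the track-history-when-an-attribute-changes pattern, and it's exactly the sort of thing still left to a human. A sketch of the type 2 flavour, with names invented:

```sql
-- Step 1: expire current dimension rows whose tracked attribute changed
UPDATE d
SET    d.ValidTo = SYSUTCDATETIME(),
       d.IsCurrent = 0
FROM   dbo.DimCustomer AS d
JOIN   staging.Customer AS s
  ON   s.CustomerId = d.CustomerId
WHERE  d.IsCurrent = 1
  AND  s.City <> d.City;

-- Step 2: insert a fresh current row for anything new or just expired
INSERT INTO dbo.DimCustomer (CustomerId, City, ValidFrom, ValidTo, IsCurrent)
SELECT s.CustomerId, s.City, SYSUTCDATETIME(), NULL, 1
FROM   staging.Customer AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.DimCustomer AS d
                   WHERE d.CustomerId = s.CustomerId AND d.IsCurrent = 1);
```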

What It Means for You (and Me)

I went in hoping for a bit more SQL Server DNA in Fabric, but it’s clear now: this is a new beast. It’s great for fast, scalable reporting. It’s not trying to be your application database.

If I were building something greenfield and cloud-native, I’d be tempted. But for most legacy lift-and-shift scenarios? I’d tread carefully. The speed and integration are excellent, but only if you’re happy playing by Fabric’s rules.

That said, I am glad Microsoft is pushing forward with this. It’s opinionated, but in a way that encourages good design. Just don’t expect to paste your old SQL scripts in and have them run first try.

PostgreSQL: ANSI by Nature, Extensible by Design

I’ve worked with PostgreSQL before. It’s shown up as the source for several data projects I’ve built, and I’ve dealt with its Redshift-flavoured cousin plenty of times too. So I wasn’t there to learn the syntax, I was there to hear someone else’s take on it.

Grant Fritchey didn’t disappoint.

This wasn’t a “Postgres is better” sales pitch, and it wasn’t trying to convert the room. It was more like an honest tour from someone who’s spent enough time with SQL Server to know exactly where the rough edges are, and enough time with Postgres to appreciate the differences.

It’s SQL, But With a Lighter Touch

PostgreSQL feels familiar enough on the surface. You write ANSI SQL. You’ve got indexes, joins, functions. But where SQL Server throws in all sorts of magic behind the scenes, Postgres keeps things lighter and more transparent.

There are roles instead of users. Functions where you'd reach for stored procs, with true procedures only arriving relatively recently. Identity columns exist, but they're sequence-backed rather than SQL Server's IDENTITY property, and there are no clustered indexes and no built-in partition switching. And no assumptions about how your data should behave. You bring your own opinions, your own tools, your own automation.
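A few of those differences in code form, a sketch from my own notes rather than Grant's slides:

```sql
-- Identity is sequence-backed (GENERATED AS IDENTITY, Postgres 10+)
CREATE TABLE orders (
    order_id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    placed_at timestamptz NOT NULL DEFAULT now()
);

-- Roles cover what SQL Server splits into logins, users, and groups
CREATE ROLE reporting LOGIN PASSWORD 'change_me';
GRANT SELECT ON orders TO reporting;
```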

Postgres Doesn’t Try to Be Everything at Once

What I enjoyed most was hearing Grant’s take on the PostgreSQL extension ecosystem. I’ve used a couple in passing, but I hadn’t really appreciated how central they are to working effectively in Postgres.

Need JSON indexing? Install it. Want better stat tracking? Extension. Geospatial? That’s PostGIS. Even tooling like pg_stat_statements (which feels like it should be built in) is something you opt into.
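The opt-in nature is visible right in the DDL. A minimal sketch, assuming the packages are installed on the server (pg_stat_statements also needs adding to shared_preload_libraries and a restart before it collects anything):

```sql
-- Extensions are installed per database, explicitly
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS postgis;

-- Once loaded, query-level stats are just a view away
-- (column names per Postgres 13+; older versions use total_time)
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```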

It’s modular by design. That gives you flexibility, but it also means you need to know what’s out there, what it does, and whether it’s stable enough to bet on. That mindset (build your own stack, don’t rely on defaults) is very different from the SQL Server world.

What It Means for You (and Me)

This talk didn't change how I feel about PostgreSQL. I still prefer SQL Server for most things. But it reminded me that understanding how other platforms work can improve your instincts on your own platform.

You stop blindly accepting defaults. You question design patterns that only exist because “that’s how we’ve always done it.” And you gain perspective on things like vacuum processes, write-ahead logging, and what “performance” actually means in a multi-engine world.

I didn’t learn PostgreSQL from this session, but I learned how Grant learned it. And that perspective, layered on top of my own, was worth the time.

Takeaways, Temptations, and Technical Tangents

I went into SQLBits 2025 with a free ticket, a cleared calendar, and no particular agenda. Just curiosity and the vague hope of picking up something useful for whatever comes next.

I left with ideas:

  • Optimised Locking looks like the first serious improvement to concurrency since RCSI, and I'm already thinking of the sprocs I'd love to test it on.
  • Tempdb governance finally gives DBAs a way to pre-empt disaster instead of just cleaning up after it.
  • Fabric is clearly where Microsoft wants the analytics world to go, and while it's not a straight replacement for SQL Server, it's impressive when used on its own terms.
  • PostgreSQL, unsurprisingly, continues to be weird, capable, and extremely worth understanding, especially if your work ever touches Redshift or you're the one bridging systems rather than building from scratch.
  • And Power BI? It turns out your report isn't "good" just because it renders quickly and doesn't throw an error. You have to actually connect with the people using it. Who knew.

I’ve been to SQLBits eight times now, and every year I come away reminded that this community is still one of the best parts of working in data. The tools change, the naming conventions get more confusing, and the workloads get bigger, but the core problems (and the smart people solving them) remain.

This year’s flavour? Concurrency, clarity, and cloud scale, with just a dash of chaos.

Exactly how I like it.

If you’re heading to SQLBits 2026 and want to swap SQL horror stories, you’ll probably find me by the coffee queue. Again.
