[MUSIC] Arun Ulag: All right, good morning. [APPLAUSE] Thank you so much for joining us. I'm Arun Ulag; I run all of the Azure Data teams at Microsoft. I'm so excited to be here. Such an exciting time to be in data. Such an exciting time to be in AI. Thank you so much for joining us at Ignite. One quick request for the folks in the room: we had four times as many people register for the session as we could accommodate in this room. So, if there happens to be a chair somewhere in between
that is unfilled, please scoot up a little bit so that we can accommodate a few other people. So, thank you so much for joining us. We have about 10,000 people joining us online as well, so it's a really, really exciting session. So, we're going to talk about where we're going with Fabric, and we're going to start with what's on everybody's mind. It's really AI. We all recognize that AI is rapidly transforming the world. No surprise for anybody here. But we also recognize that as exciting as AI is, it is only as good as the data that
it gets to work on, right? Because data is the fuel that powers AI. Even with the best AI models, if you put garbage in, most likely you're going to get garbage out. So, it's become incredibly important for customers to get their data estate ready for AI. Unfortunately, it's a lot harder, more complex, and more expensive than it needs to be. And nothing represents that better than this slide. This is the data and AI landscape slide put together by a venture capital firm in the Valley. You know, they produce a version of this slide every
year. This is, I think, the 10th or 11th version of the slide. But every tiny icon on this slide is a product or technology in the data and AI space. And this is the complexity confronting you today, because the burden is on you to figure out which products to use, which ones work together, how they are priced and licensed, and how to bring them together to create business value. That's why, from a Microsoft perspective, we are putting all of our products and technologies together into what Satya refers to as the Copilot and AI stack. So on the
Microsoft side, everything just works together seamlessly so that you can focus on moving your business forward. Now, what my team and I are doing is taking the "Your Data" tier of the Copilot stack and converging all of the capabilities we have in the Azure data team into just two things: Azure Databases and Microsoft Fabric. So, in this session, we're going to talk about Microsoft Fabric. We introduced Fabric, and it became generally available just a year ago at Ignite. And with Fabric, we really brought together a set of core workloads so that you can
do everything you need in a single unified SaaS platform to go from raw data to AI or BI value in the hands of your customers. Fabric has a set of core workloads. Each of these workloads is purpose-built for a particular persona, like a data scientist, a data engineer, or a data warehousing professional, and for a specific task. However, it's not just a bundle of products. We took time, we took years, to re-engineer these products so that they actually work together as a seamless platform. It has unified experiences, it has a unified architecture, and we even unified the
business model so we can drive down costs. Now, this vision has really, really resonated with customers, and customer adoption of Fabric is off the charts. Let me just give you three examples. Chanel, one of the world's leading companies in the fashion industry, adopted Fabric as its next-generation analytics platform. Epic, the largest healthcare company in the US and one of the world's leading healthcare companies, chose Fabric as well when they were looking for a next-generation analytics platform. They chose Fabric because of the strong enterprise capabilities, but they also chose Fabric because of OneLake, because it
gives them a multi-cloud SaaS data lake which allows them to make their information available to their customers as well. Another example is Denner Motorsports. Denner Motorsports runs the Porsche Cup, and they use Fabric's real-time intelligence capabilities to get telemetry from the cars as they're literally racing around the track and make good decisions. Now, these customers are not alone. Today, over 16,000 organizations, in pretty much every geography and every industry, are using Fabric, including 70% of the Fortune 500. Let's hear from some of these customers. Satya Nadella: We are
really thrilled to be announcing Microsoft Fabric, perhaps the biggest launch from Microsoft since the launch of SQL Server. [MUSIC] Mike Holzman: Fabric allows us to build things faster. We'll be able to focus on driving real value out of the capabilities that are there. Speaker 2: We switched to Microsoft Fabric at breakneck speed, completing the transition in just two weeks. Speaker 3: Microsoft packaged the enterprise-level functionality so that it can be a low-code or no-code solution. Speaker 4: Our Altec Fabric pilot project was to analyze travel spend. Our vision is to bring together, in one platform,
data from multiple sources. Jimmy Grewal: Fabric's end-to-end cloud solution has empowered us to act on high-volume, high-granularity events in real-time. Enzo Morrone: The presentation of this data, it's intuitive. It's friendly. The real-time intelligence gives the ability to take an action before even the driver notices. Speaker 5: Because everything is integrated, we get information rights protection, as well as security and access policy. Speaker 4: AI will play a significant role in many areas for Altec. Speaker 2: The features and functionality are out-of-this-world. Speaker 3: It's a great time to be in data and AI, and we're
just at the start. [MUSIC] Arun Ulag: So as you can see, Fabric is really, really exciting for customers. [APPLAUSE] Now, as excited as I am about 16,000 customers, it's just the beginning. We're really excited about bringing Fabric to every organization and every developer on the planet. And one example where we have democratized access to data and analytics at scale is Power BI. Power BI today has over 375,000 organizations that use it, including 95% of the Fortune 500, and we have over 6.5 million monthly active developers. Now, this curve that you see here is actually
the usage growth of Power BI since the day we started, and you can see that it's continuing to grow exponentially, and the growth has only accelerated since we launched Fabric. And the reason I'm talking about Power BI here is because many of the folks in the audience use Power BI today, and for every Power BI developer, Fabric is just one click away. We offer a free trial with no Azure subscription and no credit card required, and we give every developer $17,000 of Fabric capacity over two months so that you
can build something real. You can experience what Fabric can do for you. Now, we're not done from a pace of innovation perspective. One of the things that you've seen us do before with Power BI is really that cadence of continuous innovation. Just like Power BI, we ship a new release of Fabric every single week, right? And every week, you'll see us publish new blogs about the new capabilities that light up. And these capabilities are not just coming from us, but they're coming from you. If you go to ideas.fabric.microsoft.com, you can create ideas or vote on
ideas, and every semester, we take the top-voted ideas and try to make sure we ship them very, very quickly so that you know that Microsoft is listening, Microsoft is learning, and the product is evolving to meet your needs. Every month, we take all the features that shipped that month, and we publish a monthly Fabric blog. And each of these blogs is 60 to 80 pages long, just to give you a sense of the level of innovation that Microsoft is bringing to bear. So, we talked about Fabric, and this is what we launched. Now, we
have some exciting announcements for you today. We are announcing the general availability of real-time intelligence. Right, thank you. [APPLAUSE] There is so much real-time data out in the world today: data from IoT devices, data from application telemetry, logs, security logs. But it's notoriously hard to work with. And with Fabric's real-time intelligence, we make it drop-dead simple, so it's something that you absolutely need to try. Thousands of customers tried it out during the public preview, and we're seeing massive adoption of the real-time intelligence capabilities in Fabric. The other thing that we're doing
is we are simplifying this picture a little bit. We're combining our data engineering, data science, and data warehousing workloads into just analytics. And the reason we're doing that, really, is just to make room on the slide for the biggest change to Fabric since we announced it, which is the introduction of Fabric Databases. [APPLAUSE] With Fabric Databases, we're bringing our entire database portfolio to Fabric, starting with our flagship SQL Server product. You get full world-class transactional SQL performance, all integrated into Microsoft Fabric. And just like Fabric, it's all software-as-a-service, and all
of the data is integrated into OneLake. And the reason we're doing this is we believe that the distinctions between transactional databases, NoSQL databases, document databases, vector databases, in-memory databases, all these distinctions are blurring very, very quickly. In most AI projects, you're using these things together in conjunction. Which means that, by driving this convergence in Fabric, we make it much easier for you to build applications and make the transition to the era of AI much, much simpler. So, let's watch a quick video. Voiceover: Microsoft Fabric's unified data platform now brings together
all your data with Fabric Databases, a new generation of autonomous databases that streamline application development. In seconds, provision and deploy a SQL database built upon the same proven industry-leading SQL Server engine, all on a simple and intuitive software-as-a-service platform. Spend less time on resource planning with auto-scaling compute, and get fast, consistent app performance with automatic resource optimization and intelligent auto-indexing, all while working in your favorite tools like VS Code and GitHub. Accelerate innovation with AI-assisted T-SQL code generation and chat-based Copilot assistance. Create unique experiences with the help of built-in vector support and Azure AI integration.
Finally, you can experience peace of mind with databases that are secured by default, with automated disaster recovery and high availability, and with all your data replicated to OneLake, accessible by Fabric's analytical engines. Building intelligent AI applications is faster and easier with autonomous Fabric Databases, part of the unified Microsoft Fabric data platform. [MUSIC] Arun Ulag: Hopefully that's really exciting for you guys. We're super excited about it. [APPLAUSE] We're also adding industry solutions to Fabric, and we're making Fabric extensible as well. So what you'll find as generally available today is a range of industry solutions -- thank you
-- everything from sustainability to healthcare and retail, all built into Fabric. So, if you care about these solutions, it dramatically accelerates time to value. In May, we also announced the Fabric Workload Development Kit, which makes Fabric extensible. If you're an ISV, you can bring your own workloads to Fabric. Now, today I'm announcing that it's generally available. We're also excited to show a whole range of ISVs that are actively extending Fabric and bringing their own workloads to it. And these are not just trivial integrations. They're deeply integrated into Fabric,
making sure the data lives in OneLake, the artifacts live in the same workspace, they use the same permissions model, et cetera. A whole bunch of these ISV solutions are available in public preview today, so those are the ones that are highlighted on top, and everything else is being worked on and should reach public preview in the coming months. So, when I switch forward and think about the Fabric roadmap, there are four areas we're working on. The first is really an AI-powered platform that allows you to dramatically accelerate your time to value. The second is
OneLake, an open and AI-ready data lake. The third is making sure that these AI capabilities reach every business user, and all of these capabilities will be built on a mission-critical platform. So, to go much deeper and show you some exciting demos, I'd like to invite Amir Netz, Technical Fellow and CTO. [APPLAUSE] There you go, Amir. [APPLAUSE] Amir Netz: I'm so excited. We're actually going to spend the rest of the session just looking at the product, experiencing it, seeing demos, and we're going to use the same framework that Arun presented, with the three pillars as the guideline
here and as the structure of the presentation. We'll start with the AI-powered data platform. What we are presenting here is really a complete platform for everything that you need for data, for every workload, whether it's transactional, whether it's analytical, whether it's real-time, whether it's batch. Everything that you need is in one platform, all integrated in both the experiences and the architecture, all powered by AI. And to show us what it means to really build a data tier for your application, I'm going to invite Patrick to the stage, and we're going to take a
look. Hey, Patrick. PATRICK LeBLANC: What's up, Amir? [APPLAUSE] Amir Netz: All right. So, we're going to see end-to-end. It's going to be a bit different this time, right? PATRICK LeBLANC: A bit different, a bit different. Up until now, the only things that Amir and I have talked about on stage together are complete analytical solutions. That's all we do. But this time, it's going to be a little different. And so, we've built this app. We've built this app called Contoso Outdoors, and the entire solution is built in Fabric. Amir Netz: And this is not an analytical
solution, right? PATRICK LeBLANC: This is not. This is a complete data solution, and we're going to make a change to it. Let's take a look. So, you can see this is Contoso Outdoors, and this is where all of our vendors and our suppliers go to talk with each other to make sure that we have all the products that we need. And if we switch over to Fabric, you can see this is a complete solution. We can do everything in Fabric, from visualizing data to storing data to ingesting data.
We even have real-time telemetry built in, so we can track everything that's going on in the database. But the star of the show today, Amir, is one of my favorite things, and where I started my career: the SQL Server database. We're introducing the SQL Server database. I even wore a shirt, right, to commemorate the moment. And so -- but we need to make some changes, and before we make those changes, we know data is a team sport. And so we have built-in source control in Fabric, and so what I'm going to do is I'm
going to use our new branching capability to not only create these objects and move them over into another workspace, but also to create a new branch in DevOps or GitHub. Amir Netz: This is directly to GitHub? PATRICK LeBLANC: Absolutely. I don't have to do anything. And once it's all synced over to the workspace, instead of introducing a breaking change, I create my own feature branch, and you can go into your SQL database. And this is not some scaled-back version of SQL. You can create tables. You can create views. You can create stored
procedures. Amir Netz: It's fully compatible with the T-SQL that you know and love, whether from SQL on-prem or SQL in Azure. Everything is there. PATRICK LeBLANC: Absolutely. You can create indexes if you want your queries to run fast, right? Just kidding. But I need to add a view to this database. Now, Amir and I have been writing T-SQL since the 1900s. [LAUGHTER] So, I'm not going to write any T-SQL. I'm going to use Copilot. I'm going to do Copilot-first development, and I'm going to ask Copilot, can you
create this view for me that I need for my application? And just like that, it creates the T-SQL, and I get that T-SQL committed back to my database. No hands. I just asked it to do it for me. But I need to expose this data to my app. I could use the traditional approach of creating a data layer in my application, but instead I'm going to use GraphQL. Amir Netz: And GraphQL is great when you're building web apps because everything is JSON-based. PATRICK LeBLANC: Absolutely. And it's an open format. But instead of
me writing it and embedding it in the application, I'm just going to use an API, Amir. Amir Netz: Okay. And just take the endpoint of the API. PATRICK LeBLANC: And I'm going to copy that endpoint, paste it over in Visual Studio Code, paste my query there, and then compile my application, and all of my developers, all of my vendors, all of my suppliers can go to one place to ensure that they have all the stock levels they need. And I just did that in just
a couple of clicks. Amir Netz: That's awesome. PATRICK LeBLANC: Yeah. Yeah. And so finally, I want to get this committed back. Amir Netz: The data is going to be in OneLake, right? PATRICK LeBLANC: Yeah. Because my SQL database is automatically integrated with Fabric, it syncs all of my data to OneLake, so not only can I do operational workloads, but I can also create beautiful reports that are blazing fast and won't contend with the performance of my application. Amir Netz: Yeah. PATRICK LeBLANC: It's truly remarkable. And so, now
that I'm all done, I want to get this committed back to my source control, use the integrated source control in Fabric, and just click "Commit." How cool is that? Amir Netz: That's super cool. What do you think? [APPLAUSE] So a few things here. Number one is, you see the source control. We have eight new items in Fabric that now support Git CI/CD. PATRICK LeBLANC: Yep. Amir Netz: And by the end of the year, everything that we have in preview will be there. PATRICK LeBLANC: Yep. Amir Netz: That's really advanced. The other thing
you mentioned is the GraphQL. PATRICK LeBLANC: Yeah, it's exciting. Amir Netz: And we have an announcement. The GraphQL API for Fabric is now generally available, which is awesome. PATRICK LeBLANC: Which is awesome. It's amazing. So, less code for me to write, right? Just an API. Amir Netz: Now, Arun mentioned these industry solutions, right? PATRICK LeBLANC: Yeah. Amir Netz: And so, we'd like to show you a little bit of that. It's not really a full demo, but what's going on here? PATRICK LeBLANC: So, sustainability is important to most organizations, and they have KPIs that they need
to hit. But imagine trying to collect all the data you need into one central place. The data is not only disparate, but it's in different formats. With this new industry solution, I basically click a button, give it a name, and all the items, all the artifacts I need, are quickly deployed out to my Fabric environment. And then, I can actually take a look at that data to make sure my business is truly sustainable. Amir Netz: And we're bringing more and more industry solutions in. We expect to have around a dozen there, from every industry:
healthcare, retail, telecom, everything that you need. It's coming to Fabric. So, whatever industry you're in, you're going to find that Fabric is just designed for your solutions. PATRICK LeBLANC: Absolutely. Okay. Thank you, Amir. Amir Netz: Thank you so much. PATRICK LeBLANC: Thank you. Amir Netz: Okay. Moving to the second pillar. [APPLAUSE] This is the Open and AI-Ready Data Lake. This is really the world of OneLake, the OneDrive for data. If you haven't heard about OneLake, well, you've been sleeping under a rock for the last year. This has been an amazing, amazing journey with OneLake. This
is the OneLake for the entire organization. It's infinitely scalable. It's globally deployed. It's one, only one, OneLake for the whole organization. All the workloads of Fabric store their data in OneLake. All the data is always stored in an open format. There is no proprietary format anywhere in Fabric. And once the data is there, well, it's managed. It's governed. We handle the lineage. We're going to talk more about it when we talk about the catalog. Wait for that. But it's all managed by the catalog. And boy, you guys have been responding to OneLake like there is
no tomorrow. Just take a look at that. We get 21 billion interactions with OneLake every day. Four million shortcuts, the way to connect your OneLake to all the existing storage systems you have out there, have already been created. Every 16 weeks, we double the volume of data stored in OneLake. And to show us how we get the data into OneLake, I'm going to bring in Shireen. Hey, Shireen. Shireen Bahadur: Hey, everyone. [APPLAUSE] Hi, everyone. Yes. Amir Netz: So Shireen, we can bring the data from everywhere into OneLake.
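(An aside on the growth figure Amir just quoted: doubling every 16 weeks compounds to roughly 9.5x growth per year. The arithmetic below is ours, not a number from the talk.)

```python
# Compound growth: if stored volume doubles every 16 weeks,
# a 52-week year compounds to 2 ** (52 / 16) of the starting volume.
annual_factor = 2 ** (52 / 16)
print(round(annual_factor, 1))  # roughly 9.5x per year
```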
Shireen Bahadur: Yes. Amir Netz: There are several mechanisms, right? Shireen Bahadur: Exactly. So there are many different ways to bring data into OneLake, but I want to hone in on a couple that are really important. So let's start with shortcuts. Shortcuts provide virtualized connections across domains and clouds, basically allowing you to virtualize your data all in one place, in this case, OneLake. And you can connect to different storage locations and file systems, with Microsoft and non-Microsoft sources such as AWS, GCP, and Snowflake. And there's absolutely no data movement or data duplication. Amir Netz: So, just a way to virtualize all the data, on-prem and in every cloud, everything in OneLake. Shireen Bahadur: Yes. Amir Netz: Great. Shireen Bahadur: Yeah, absolutely. Amir Netz: And then there is mirroring. Shireen Bahadur: Exactly. So, mirroring is a continuous data replication solution for your operational databases. That includes all databases or specific tables; it really depends on what you want to do. So you can bring all that change data directly into OneLake, and our engine continuously replicates that data for you using change data capture, or CDC, technology underneath the hood. Amir Netz: And it's
super simple, because all you have to do is just point to the database and say, I want to mirror that database, and whoop, it just shows up in OneLake. Shireen Bahadur: It just shows up. So, should we dive a little bit deeper into mirroring? Amir Netz: Yeah, let's do that. Shireen Bahadur: Okay. So, mirroring has been an absolute hit in the past year. We currently have a variety of different sources, including Snowflake, which recently went GA. And as of today, we have announced mirroring for Azure SQL DB as generally available.
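The continuous replication Shireen describes boils down to applying a change feed to a keyed table. A toy Python sketch of that idea (the event shape and field names here are invented for illustration; this is not Fabric's actual engine or format):

```python
# Toy illustration of CDC-style replication: apply a stream of
# insert/update/delete events, in order, to a table keyed by primary key.

def apply_cdc(table, events):
    """Apply (op, key, row) events in order; table maps key -> row dict."""
    for op, key, row in events:
        if op in ("insert", "update"):
            table[key] = row          # upsert semantics
        elif op == "delete":
            table.pop(key, None)      # tolerate deletes for unknown keys
    return table

orders = {1: {"item": "tent", "price": 250}}   # initial snapshot
feed = [
    ("update", 1, {"item": "tent", "price": 100000}),
    ("delete", 1, None),
    ("insert", 2, {"item": "kayak", "price": 800}),
]
apply_cdc(orders, feed)
# orders is now {2: {"item": "kayak", "price": 800}}
```

The real mirroring engine does this at scale and lands the result as Delta tables, but the replay-a-feed idea is the same.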
Isn't that great? Amir Netz: That's good. Shireen Bahadur: Exciting, yeah. But it doesn't stop there, right? We're continuing to listen to your feedback and, of course, improving the product's capabilities. So today, I'm excited to announce that we're introducing four new sources that are coming soon. We have mirroring for SQL Server, SQL Server 2025, PostgreSQL, and Oracle. Over the coming weeks, you'll see these lighting up, so please stay tuned. It's really, really exciting. Amir Netz: Yeah. Yeah. Now, you can see that we're graduating more and more
databases we support with mirroring. But there are so, so many sources out there that we have to connect to, and we don't want you to have to wait for us. So, there's a new thing that we're announcing today, which is called Open Mirroring. Shireen Bahadur: Exactly. Amir Netz: So, what is Open Mirroring? Shireen Bahadur: Yeah, Open Mirroring. So the goal of mirroring, in general, is to give customers the flexibility to bring data in from anywhere, right? Open Mirroring, which is in public preview as of today, lets you bring any data from any application or any source directly into Fabric. So, all you really have to do is bring that data into a landing zone, and we take care of the rest. Amir Netz: So, what do you have to bring in? For mirroring to work, you have to bring the initial snapshot of the database. Shireen Bahadur: Yes. Amir Netz: And then start bringing us the CDC, the change data capture feed, of the database. Shireen Bahadur: Yes. Amir Netz: You drop it into the landing zone, and then we make it into a Delta table automatically for you. Shireen Bahadur: Right, yeah. And it runs automatically, just like mirroring works. Same thing. Amir Netz: It's super, super simple, right? Shireen Bahadur: It's really simple. So, let's take a look to see how easy this actually is. So, directly from my Fabric home page, I'll create a new item. And over here, you'll see all of my sources, right? You have Cosmos DB, which is in preview. We have Azure SQL Database, Databricks Catalog as well, and Snowflake. And you'll see a few other ones coming soon, too. But
now we have this really cool capability called Mirror Database, which is our Open Mirroring functionality. So, I'll go ahead and click on it, give it a name, and then hit "Create". What I want to do now is show you guys the inside mechanics of how Open Mirroring actually works. So, we have that landing zone, right, which you'll actually see over here. But I have some orders data on my desktop as a CSV file. And if I open it, I can see all my rows and my headers directly
here. And as Amir was mentioning, it's so simple. All I have to do is take that orders CSV file and drag and drop it into that landing zone. And immediately, there's a file there. So, what's happening in the back end: we're looking at the initial snapshot, we're looking at change data, and we're making that file ready in an analytics-ready format. Amir Netz: So that was the initial snapshot, and you automatically converted it into a Delta table. Shireen Bahadur: There you go. That table automatically shows up here, right? So now, if I want to go
monitor, I can use the replication status, or I can go to my SQL Analytics endpoint, which you guys are all familiar with, right? So, I'll go to my SQL Analytics endpoint and verify that my rows are actually there. And we're working with about, you know, 62 rows of data. And as you know, orders data is always being created or modified. Amir Netz: So, we need to introduce these CDC changes. Shireen Bahadur: CDC changes, exactly. So now, if I zoom into the first row over here, I'll notice that my price for that particular row is
incorrect. And I want to modify that to, let's say, about $100,000. Amir Netz: Okay. Shireen Bahadur: So, all I have to do is create CSV files with only the changes, right? And look at this particular CSV file, Amir. The difference is that we have a column here called row marker that specifies the operation for each row. So, if I look at the first row, I'll see that for row number one, I'm changing that particular row to $100,000. Amir Netz: And the marker of four, number four, says that's a
change. Shireen Bahadur: It's a change. In this case, it's an upsert, right? And I can add even more operations in the same CSV file. If I look at the next three rows, I'm deleting them, and their row operation is set to two, which means delete. Amir Netz: Delete. Shireen Bahadur: Exactly. And those three rows correspond to rows in my orders table. Amir Netz: And one will mean that you insert? Shireen Bahadur: Exactly. You're completely right. So, for the next five rows, I'm inserting each row, and that
one means insert. So now, I know that these will be inserted into my orders table. I could have actually separated these into different CSV files, but I packed them into one for this particular example. So now, once again, I'm going to drag and drop that CSV file with the changes directly into the landing zone. And once again, it's already in that analytics-ready format. So Amir, should we check to see if those changes have been reflected? Amir Netz: Yeah, let's see. So, we updated the first one, deleted two more. Shireen Bahadur: Yes. Deleted three, and
then we added five. Amir Netz: Yes. Shireen Bahadur: Okay. So now, look at the first row. We have updated that price to $100,000. Amir Netz: Yes. Shireen Bahadur: So check on that aspect. Amir Netz: Yes. Shireen Bahadur: I don't see rows two, three, four, so they actually have been deleted. So, check. Amir Netz: Yes. Shireen Bahadur: And then the five rows, 63 to 67, have actually been inserted in. Amir Netz: Yes. Shireen Bahadur: So, how simple is that? Right? Isn't that totally simple? Amir Netz: Yes. The point is, it's very geeky, but we really want
to show how simple it is. You can do it with Notepad. Now, of course, you will not do it with Notepad. We know that. But you can write Python code to do it. You can write C# code to do it. You can build it yourself, or you can use one of our partners. Shireen Bahadur: Exactly. So, the second way to use Open Mirroring is integrating it with our vast partner ecosystem. We have partners like Striim, Oracle, MongoDB, and DataStax that are integrating their data solutions with the Open Mirroring APIs. And
we're really excited to work with these partners in the next few months to increase the number of mirroring sources. Amir Netz: And best of all, it is still all free. Shireen Bahadur: Yeah. So, Open Mirroring is new, right? But it still sits under the umbrella of mirroring as a whole. So, that means all your replication from your sources into OneLake is free, allowing you to just focus on bringing your data gravity into Fabric. Yeah. Amir Netz: Awesome. Thank you so much, Shireen. Shireen Bahadur: Yes. Thank you, Amir. Have a great conference, everyone. [APPLAUSE] Amir Netz:
Okay. A lot of you use Fabric. Lots of you have a lot of data in Fabric. You want to make sure it's secured and governed, so we are constantly working on that. The first thing I want to announce is certification. You're using Fabric everywhere, and you want to make sure that the solution you're building is built on a certified platform. So, we are now announcing the last major certification for Fabric, which is the FedRAMP certification that you need when you work with the federal government of the U.S., something that is necessary. It's here. This month
we announced it. That is the last of the major certifications we actually need; all the other certifications are basically derived from the six certifications we have here. Of course, features. Lots and lots and lots of governance features and security features. Lots have shipped. Lots are constantly being worked on. It is the top priority for us to make sure that you have everything you need to govern and secure your platform. And to show some of the innovation coming in this space, I'm going to invite Adi to the stage. Adi, hi.
How are you doing? Adi Regev: Hello. Hello. Amir Netz: Faster. Adi Regev: Hi, everyone. [APPLAUSE] Amir Netz: Okay. So, we're working on a lot of features, right? Adi Regev: Right. So, Fabric has so many built-in governance and security features today. Let's talk about some of the new announcements that are coming up. Amir Netz: Surge protection. Adi Regev: Surge protection. Right. So, Fabric's on fire, right? It's being widely adopted by so many enterprises, and that means they're starting to leverage it for their mission-critical tasks. We need to make sure these aren't compromised and remain a top priority. For that, we now introduce controls for capacity admins so that they can set thresholds. And if a capacity reaches those thresholds, background jobs will simply not run, prioritizing those mission-critical needs. Amir Netz: So, you can actually deprioritize the development workspaces, or the test workspaces, to make sure that the most important, mission-critical part of the application continues to run even under major loads on your capacity. Adi Regev: Right. And we also provide flexibility there, so that you can set different thresholds and limits per capacity and have that granular control. Amir Netz: Awesome. Now, workspace monitoring. Another big thing we're announcing today. Adi Regev: Right. So, visibility is key, right, especially for all of these mission-critical pieces. Now, we already provide a lot of monitoring capabilities in Fabric: admin monitoring for admins, or the monitoring you have for data owners. But now, we provide workspace monitoring for application developers, so that they can track at a granular level what's happening with their projects, perform root-cause analysis, track downtime or performance issues, and see all of those in the relevant logs.
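The kind of root-cause analysis described here can be pictured with a small sketch. This is a toy illustration, not the actual Fabric workspace log schema; the record fields and helper functions below are assumptions made for the example:

```python
from collections import Counter

# Invented sample of workspace run logs -- the field names here are
# assumptions for illustration, not the actual Fabric log schema.
logs = [
    {"item": "SalesPipeline",  "status": "Failed",    "durationMs": 912_000},
    {"item": "SalesPipeline",  "status": "Succeeded", "durationMs": 310_000},
    {"item": "OrdersNotebook", "status": "Succeeded", "durationMs": 42_000},
    {"item": "SalesPipeline",  "status": "Failed",    "durationMs": 895_000},
]

def failures_by_item(records):
    """Count failed runs per item -- a typical root-cause starting point."""
    return Counter(r["item"] for r in records if r["status"] == "Failed")

def slow_runs(records, threshold_ms):
    """Flag runs slower than a threshold to spot performance issues."""
    return [r["item"] for r in records if r["durationMs"] > threshold_ms]

print(failures_by_item(logs))    # SalesPipeline failed twice
print(slow_runs(logs, 500_000))  # both slow runs are SalesPipeline
```

In the product these questions would be asked over the logs themselves rather than in application code; the sketch just shows why per-run, per-item log records make downtime and performance triage straightforward.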
Amir Netz: So, this is kind of the monitoring that you need for DevOps. So, we really want to understand how your application is performing, how it's working, what's going on inside. So, that's workspace monitoring. It's actually built on top of the real-time intelligence technology we have. Adi Regev: Right. It's all saved into an eventhouse so that they can later query those and, you know, based on that, perform ad hoc queries or even save the query sets for later. Amir Netz: Yeah. You know, you can run any query you want using the KQL language. That's awesome. Adi Regev:
Exactly. Amir Netz: Okay. Now, we have the big one. This is your baby, Adi, right? Adi Regev: It's definitely one of my favorite children: the OneLake Catalog, which is now generally available. So, this is actually the evolution from the known and loved OneLake Data Hub into a full-blown catalog for your OneLake data. And with that, we allow all Fabric users, so data engineers, data scientists, business analysts, all the Fabric users, to easily discover all of their data, right? They can then manage it easily in place from within the catalog. And they can also govern their entire data estate, with relevant insights and recommended actions on the relevant data. Amir Netz: So, data discovery, data management, data governance, all in one. Adi Regev: All in one. Amir Netz: For everything we have, in one place. Adi Regev: For all item types, right? Amir Netz: Okay. Let's take a look. Okay? So, we'll start with discovery, right? Adi Regev: Right. And discovery has been a key challenge for enterprises. So, I'm in the Explore tab in the new OneLake Catalog, and I can start by browsing my domains, right? So, I'll browse my domains and
subdomains to search for the relevant data for my business unit. I'll select Sales in this case, because I'm coming from there and I want to build a relevant report. I can explore by endorsed items, or favorites, or filter to a relevant workspace, and then I'll select the relevant content that I need. Right? Now, it has been a key ask to support all item types within the catalog, and with this evolution, we now do that, so you have your entire OneLake data estate at your fingertips. So, I can search for all of the data items, right, like lakehouses, semantic models, and the new SQL database, which is now introduced in Fabric; the popular insight items, like Power BI reports or dashboards; process items, like pipelines or notebooks; all of the data and items at my fingertips. Right? Next, another key feature has been tags, right, so that you can curate your data and optimize discovery based on tags. We now support that, and I can select relevant tags to filter down my search. I'll look into the Sales Booster warehouse next, and I can see relevant metadata, like description, owner, endorsement, sensitivity label, but I can also browse its schema, so its actual tables and views, to see if that's the data I'm looking for. Amir Netz: That was a major ask, going all the way down to the column. Adi Regev: Major ask. Major ask. So, I'll move on to a semantic model, which seems more fitting to what I'm looking for. It's also endorsed as master data. Again, I'll explore the tables and columns, and based on that, I see it's the item I've been looking for. Right? So, once I've found what I need, I can perform relevant actions. For instance, I can click on "Explore This Data" to actually derive key insights on the fly, visualize those insights, and once I have what I need, I can either save it for later or share with others. Amir Netz: Okay. So, one place to find and discover every item we have in Fabric, whether it's a data item, a process item, an insight item, and so forth. Now, we need to manage it. Adi Regev: Right. Amir Netz: And I don't want to go to the workspace every time to do that. I can do it all from within
the catalog. Right? Adi Regev: Right. So, the next piece is allowing you to manage your items in place easily. Amir Netz: Let's take a look. Adi Regev: Let's have a look. So, I've moved on. I'm in that same item, and I've moved on to the lineage view, where I can see, for instance, end-to-end relations for a selected item, down from the store analysis report all the way up to the SQL database. I can move on to the list view to see additional information like endorsement or sensitivity, and here, for example, I actually see that some are labeled as confidential, while others are labeled as general. But I remember that the Global Store SQL database actually contains sensitive information. Amir Netz: Yep. Adi Regev: So, I want to go ahead and fix that and adjust the sensitivity, and I can do that all from within the catalog. I'll easily access the settings and adjust the relevant sensitivity label, and once I do that, not only does it fix that SQL database, but all of the downstream items inherit that sensitivity label automatically, to ensure they all remain compliant and consistent. I'll move on to the monitor tab, where I can see all of the last runs, and I can see the last one failed. So again, I can trigger a refresh and refresh that outdated item directly from within the catalog. And I can track and manage my permissions for that item, both internal and external shares, all available within the catalog. Amir Netz: So, notice that we never have to go through the workspace, ever. Adi Regev: Right. And now that, for instance, my item is up to date and it's labeled correctly, I can go on and collaborate, share it with others, and perform other
activities. Amir Netz: Okay. Governance. Governance is not about managing individual items; it's about your entire estate, right? Adi Regev: Right. Right. Amir Netz: So, show us what we have in Governance. Adi Regev: So, Governance, which is coming soon in preview, allows you to govern your entire data estate, get key insights, and drive actions. And again, I can filter by a selected domain, or I can, you know, choose to view all the insights on my domains at once. I get key insights at a glance which are relevant to me, or I can click to view more, where I'll see a detailed report of my entire data estate: data hierarchy, data inventory, data refreshes, my entire status, right? But I can also track how secure and compliant my data is, with sensitivity label coverage and distribution by item type, and I can also see how curated my items are with my use of tags or descriptions or endorsement. So, it makes it really easy to understand from a bird's-eye view what's going on. And back in the main view, I can actually see recommended actions, especially for me, so actions like increasing sensitivity label coverage or refreshing outdated items. And if I click on a card, I'll see the details, an explanation of why I'm getting this recommended action, and steps I can take to address it. And last, we mentioned that we have so many built-in governance capabilities, and we're also integrated with Microsoft Purview, et cetera, from within Fabric, so I get central access to all of those from within the Govern tab. Amir Netz: That's awesome, and so beautiful, right? What do you think, guys? [APPLAUSE] Now, not only is the catalog itself
extremely useful for anybody who's using Fabric, the catalog is really the gateway to the rest of the Microsoft stack. Adi Regev: Right. Amir Netz: You see all the products that we have at Microsoft that integrate with the catalog, whether it's Microsoft Excel or Copilot Studio, and in the AI keynote we've seen how it integrates with OneLake; it's integrated everywhere. And I'll give you an example: in Microsoft Teams, this is how the catalog looks. Adi Regev: Right. Amir Netz: It looks exactly the same way. Adi Regev: So, we already have the OneLake Data Hub there today, and soon you'll have the full-blown OneLake Catalog. It's that very same one I showed you. You'll be able to filter by domain, see all item types and the rich metadata, and from there, access everything. Amir Netz: Thank you so much, Adi. Adi Regev: Thank you. [APPLAUSE] Amir Netz: Okay. Taking us now to the last part, the last pillar: AI-enabled insights. This is the world of the business users. This is the world of Power BI. Power BI has been around for 10 years. It is
the primary tool for every business user to get insight into their data. We have tens of millions of Power BI users. I want to invite Patrick to join me on stage and show us what we're doing here. How do we bring AI to the world of the business user? Patrick Baumgartner: Hello, everyone. Yeah. So, in Power BI, you know, the key thing for us has been thinking about how we use AI to really simplify how everyone experiences and interacts with their data. I'm going to go to the next slide. When you think about personas across the board, you've seen a couple of demos today about Copilot coming in and helping me generate reports quickly, and it can help me get answers to questions. And as we've looked at how people are actually using it, we've seen incredible productivity boosts. Amir Netz: And we actually measured it. Okay. I want to share a study with you: a real study, about 200 people that we actually measured in the lab to see the productivity. And look at the productivity gain here. It's a 52% improvement in the ability to complete the task faster. Patrick Baumgartner: Yeah, exactly. So, we take people, we give them a task with Copilot and a task without Copilot, and we actually see a dramatic increase in productivity. And a lot of times when we think about AI, we think, hey, AI is going to do everything end-to-end. It's not always that; it's often about helping me get to that next task a little bit faster, adding ambient insights. So, really exciting results for us. Amir Netz: Yeah. So
you get faster results, you get more accurate results, and most important, 90% of those who used Copilot wanted to continue using it. Patrick Baumgartner: Yeah. And one of the things we're hearing from all of you is: how do we streamline how people get access to Copilot? How do we understand cost? How do we make it more available? Amir Netz: And this is really where we have a great announcement, because now you can have what we call Fabric AI Capacities. You can designate a capacity in your tenant to cover all the reports you have in the organization, whether the reports are coming from workspaces that have capacities assigned to them or those that don't. All your reports can be powered by Copilot using that capacity. Patrick Baumgartner: Yeah. So, Fabric AI Capacities is a new mechanism you can use to more easily deploy AI and Copilot to your users. A very exciting announcement. Amir Netz: Okay. Now, we have AI Skills. Patrick Baumgartner: Yeah. So, you saw a lot of stuff with AI Foundry and other ways to build chat experiences on top of your data and integrate that into your apps. We have a way to help you simplify that in Fabric as well, because you have lots of different types of data, and to understand that data, you need to bring it together and add a little bit of expertise. That helps you streamline how users get access. And that feature is called AI Skills. So, let's go ahead and take a look at
a quick demo so you understand this capability. So, here I am in the same Contoso Analytics workspace; we've been using a few of these for these demos. And I've got lots of different types of data that are all coming together. What I want to do is create a customer data expert that pulls data from a couple of different sources but brings it together in a way I can control. And to do that, I'm going to create an AI Skill. So, this is something we've had in preview for a while, and previously, you could only use a lakehouse as the data source. Now we're excited to announce that you can add additional data sources into the same AI Skill as well. So, I'm going to start by grabbing a KQL database that's got some real-time delivery information about packages for my customers. And just by selecting the data, I can start asking questions and using a large language model to give me answers from that data. So, I can say: break down the number of package delivery trips per month, and which months have the most deliveries. And automatically, it recognized the type
of data set, generated the correct Kusto query, and gave me the answer. So, the setup is really, really simple. And I can ask statistical questions as well. So, what's the 99th percentile for trip distance? It's going to generate the correct query to go ahead and give me that. Amir Netz: The data could be anywhere. It's not just in one database. Patrick Baumgartner: Exactly. And I don't necessarily always want to unify that into one data structure. I want to be able to just link to where the data is. So, let's add a couple of other data sources. I'm going to add a semantic model from Power BI, and I'm going to add a lakehouse where we have some additional data: for customer loyalty, for orders and sales. And I can just select the data I want; that's all the setup I need to do. So now what I'm going to do is go through and select the specific tables I want the AI to have access to. You can see there are a couple of different tables from the lakehouse, here's the lakehouse, and then the semantic model from before as well. So again, incredibly easy setup. And now the AI has access to the schema, so we can ask questions about that data automatically. Amir Netz: And the AI will figure out where to get the data from? Patrick Baumgartner: Exactly. It's going to just look at my question, route to the correct database, and generate the correct query. So, I can say: what's the name of the top loyalty customer? It found Teodoro. And in this case, it generated a DAX query to go ahead and pull that information out. But what's really cool now is that I can use that information in the chat. We have the chat context, so I can say: what additional information do we have about him? And it's going to know Teodoro from the first answer and then use that to look up information in the next database, in this case, the lakehouse. So again, the end user doesn't have to know where any of this data is. They can just ask questions, and the AI is traversing across these data sets. And I can keep asking questions along this route.
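To picture the routing idea behind this demo: the real AI Skill uses a large language model to pick a source and generate KQL, DAX, or SQL, but a toy version of source selection can be sketched as keyword overlap against each source's schema. Everything below, the source names, columns, and the route function, is invented for illustration:

```python
# Hypothetical sketch of the routing idea behind AI Skills: pick the
# data source whose schema best matches the question. The real feature
# uses an LLM to choose a source and generate KQL/DAX/SQL; this toy
# version just scores keyword overlap against each source's columns.
sources = {
    "kql_deliveries": {"columns": {"trip", "package", "delivery", "distance"}},
    "semantic_model": {"columns": {"loyalty", "customer", "points"}},
    "lakehouse_sales": {"columns": {"order", "sales", "revenue"}},
}

def route(question: str) -> str:
    """Return the name of the source with the highest keyword overlap."""
    words = set(question.lower().replace("?", "").split())
    scores = {
        name: len(words & meta["columns"]) for name, meta in sources.items()
    }
    return max(scores, key=scores.get)

print(route("What is the 99th percentile for trip distance?"))  # kql_deliveries
print(route("Who is the top loyalty customer?"))                # semantic_model
```

The real routing is of course far richer; the sketch only shows why exposing each source's schema to the model is what makes cross-source questions possible.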
And it's super easy to use. So, there we see the data now coming from the lakehouse for the additional information. But anyway, it's a super cool way to bring data together, and then you can think about how to integrate these into your other chat experiences all the way up the stack. Amir Netz: Okay, we have a couple more demos that are really not about what we ship today, but what's coming in the next few months. But I think it's really, really worthwhile to see what's in the pipeline. Okay. The first one is how we present and provide this Copilot experience for the business user. Now, typically, we say, hey, business users, you go to a report, and then on the sidebar, you can ask questions about what you see in the report. But we can do better than that, right? Patrick Baumgartner: Exactly. So, we don't want end users to always have to know what report to go to, because they don't necessarily know where the data is. So, let's take a look at this demo. What you notice here is I'm in the Power BI homepage, and there's a new icon up in the corner, a Copilot icon. And if I click here, I get an immersive Copilot experience that knows how to traverse all the data I have access to. So, I can come to this one location, so think about more of a business-style user, and they can just start asking questions, like, you know, how many loyalty program members did we add this month? Now, this is smart enough to look: do I have access to AI Skills? Do I have access to semantic models? Do I have access to reports?
It's going to figure out what the right information source is and answer my question. So, in this case, we added about 918 members, and best of all, it gives me the reasoning for why it found this answer, and a link to go back to that original source if I want to do further analysis. But it's still a conversational chat, so I can ask more questions. I can say: break this down by source, and it's going to go ahead and generate a visual for me, and I can copy-paste that and go ahead and use it. Maybe I want to get a table of information, so I can ask, what are the top members with anniversaries this month, because maybe I want to go send them an email or something, and now I have the table. Amir Netz: But it's not limited to just one source, right? Patrick Baumgartner: Exactly. The best part here is, if I want to switch gears now and ask about, say, HR, say what are the open positions that we have, it's smart enough to look at, again, what you have available, and switch over, and it's going to bring back an answer from my HR reporting database. And so, I can traverse these very easily. Finally, I can also ask about just the reports I have access to, so: hey, list the most interesting reports about a specific topic, and now it's gotten me that list, so I can go ahead and open that report. And then, of course, we have Copilot baked in here as well, so I can continue just using voice as my interaction mechanism. Amir Netz: That's awesome. Yeah, so a brand new way for business users to work
with AI on top of the data. And now we're going to get to the last part, okay? Last demo, but now you have to really, really concentrate, okay? We've had BI, Power BI, connecting the world of business users and business applications to the world of data, and until now, it was all about analytics. But now Fabric is more than just analytics. It has both the transactional databases and the analytical capabilities, all in one. So, can we really bring the world of Power BI and analytics together with the world of transactional databases? So, I want to teach you a new word today that you're going to remember for the next few years. It's called translytical. It's a combination of transactional and analytical, and it's really about creating an application that combines the two elements. And we're going to show you a demo here of how we take the database, plus the data functions in Fabric, plus Power BI, and create translytical applications in Fabric. And what's really cool about it is that the Power BI canvas is transformed from a read-only canvas into a canvas where you can actually update the operational database directly within your report. So, let's take a look at that. Patrick Baumgartner: Yeah, so this is a sneak peek of what's coming here, and I think this is the most exciting demo you're going to see this week, because it's taking the databases in Fabric, which I think are the most exciting thing this week, one step further. So, let's go ahead and take a look. Again, I'm in one of my solutions here, and I have my data stored in my SQL database, with a bunch of other information coming together. And what I want to do is make it so my end users can update data in the database directly from where they work. So, if I look at my opportunity database, I see a bunch of sales information, and there are always discounts we want to add or things we want to change, and I don't necessarily want people to move from their analytical area to a different app to be able to do that. So, as Amir mentioned, we have these user data functions inside of Fabric, so I can write code here
that updates the data in that SQL database. So, here you can see some of these update statements, and we added a couple of functions to update either one opportunity or a group of opportunities. And I can go ahead and test that out. So, I'm going to update the status to open, I'm going to say that this is the specific opportunity, I'm going to give it a 50% discount, and call it a test update. And if I hit "Run", it's going to go ahead and update that specific record in the database. And it's working, so, okay, we're ready. Amir Netz: Now let's bring Power BI in. Patrick Baumgartner: Yeah, exactly, because now I want to put an app experience around that, and I want to use Power BI as that home experience. And so, you're going to see us introduce a couple of new buttons and text-entry fields, and what I can do is take one of the buttons in Power BI and assign it to that function. So, I'm going to take the "Submit" button, I'm going to turn on the action for the data function, I'm going to select my workspace, I'm going to select the data function that I created, and then I'm going to tell it where the data comes from: either the report or the entry field that I have in the report. And I'm going to feed that back into the user data function so it can go into the database. So, now I'm going to switch over to runtime; I published that report, and now let's see how an end user could use this. So, I can still slice and dice and use all the
analytical features of Power BI. So, now I've filtered it down and clicked a specific opportunity. Now I'm going to give it a 25% discount, and I'm going to say, hey, I need to close this deal. And when I hit "Submit", that record goes back into the database, and you can see it updated almost immediately in the table. And just to show a little more flexibility here, we can also select groups, so I'm going to select a set of opportunities, everything that's high-risk and expiring in 60 days, and let's bring the quantity up to eight or nine. And now this is kind of interesting, because I've got both open and lost opportunities in one selection. I want to give a 30% discount for upcoming deals, and the function is smart enough to only write to the open deals. So, I hit "Submit", and again that data goes back into the database, and you can see it updating in the comments. If you think about this, this really blows the doors open on what you can imagine building in Power BI. Amir Netz: If you've been using Power BI, the number of scenarios we have here is just incredible. In my opinion, this is the biggest upgrade to Power BI since the inception of Power BI. Patrick Baumgartner: Yeah, very exciting. Amir Netz: So, very, very exciting. [APPLAUSE] Patrick Baumgartner: All right, well, thank you, Amir. Amir Netz: Thank you. Thank you so much, Pat. So, we've worked through the entire unified data platform for AI transformation. It was only 35 minutes, but if you want to get three days' worth of content, well, join us at FabCon in Vegas. We have three full days of everything you need about Fabric, about the Fabric databases, about Power BI, everything you want. Join us. And that's it. Thank you so much. [APPLAUSE]