[ MUSIC ] OLIVIA SHONE: Hello. Happy Thursday, everybody. Excited you're still with us despite the snow outside.
So thank you for powering through all the way to the afternoon. I'm Olivia Shone. I work on the Azure AI team, leading the product marketing for the services.
And I'm joined today by another Microsoftie, Deborah Chen, who works in the Cosmos DB team, and Kunal Mukerjee from Docusign. And we are so excited to learn much more about Docusign's journey today. Kunal is going to be sharing Docusign's journey and transformation, how they built the product that we all know and love and use frequently, I'm sure.
And you'll hear about how they scaled really quickly. They really focused on accuracy. And they drove a lot of optimizations across their applications.
So as we go through, Kunal will be highlighting key learnings that they found. And Deborah and I will go into a couple of insights from the Microsoft side. We do ask that you hold Q&A to the end, but we timed this, and there really should be time for Q&A.
So looking forward to the conversation as well. And before I hand the mic over, we have a short video to highlight some of the work that Docusign has done. And then it'll be over to you, Kunal.
REBECCA DENMAN: Poor agreement management costs businesses nearly $2 trillion in global economic value each year. KUNAL MUKERJEE: We needed to transform how businesses work with a new platform. We call it Docusign Intelligent Agreement Management.
IAM is built with Microsoft Azure. Azure OpenAI service helps our customers organize, interpret and act on contracts in real time. It's more than speed.
It's built to get smarter with each agreement. A smart repository called Docusign Navigator helps surface key details previously locked inside agreements, like effective and expiration dates and total contract values. REBECCA DENMAN: We use Azure SQL DB Hyperscale to store all of our agreement data.
Azure Cosmos DB ensures seamless data flow across regions, giving our customers timely interactive responses on the data inside their agreements. KUNAL MUKERJEE: Our contract lifecycle management services are hosted in AKS. With Azure Logic Apps, we've created a no-code workflow builder called Maestro, so customers can easily automate their workflows start to finish.
Azure brings AI power to important interactions across sales, legal, customer experience, HR, and procurement. REBECCA DENMAN: One of our customers saved 70% in time with IAM. KUNAL MUKERJEE: We've entered a world where agreements come to life and adapt to business demands.
With the help of Microsoft and AI, Docusign is building the future of Intelligent Agreement Management today. OLIVIA SHONE: So the star of the video. KUNAL MUKERJEE: All right.
Thanks, Olivia. [ APPLAUSE ] Thanks, guys. So I did have a role other than appearing in that video -- I'm the architect of that system.
And we'll be talking in depth about some of the areas of that system, especially pertaining to Azure technologies, and then you'll see how we were able to leverage those technologies and get into a deeper discussion. So with that, let's get rolling.
So as you guys are aware, Docusign has innovated in the past, and many, if not all of y'all, would have actually used Docusign to sign agreements at some point or the other. Maybe to sign away lots of money as you bought a house or a car, maybe to secure a job appointment. So especially as the world moved through COVID, this kind of became a standard way of agreements being signed and people moving forward from that.
And throughout COVID, maybe this was the only way to do it, because the old way of people going into the office and signing for real on paper just went away. But as this thing became really pervasive -- and you know what I'm talking about, because you guys must have gone click, click, click, done, like our initials and done. But guess what?
After you do that many, many times, then for the companies that are our customers -- and there are 1.6 million of them, and if you're looking for the actual number of signers, there's more than a billion -- think about that.
That's a lot of agreements. And when you have that many agreements stacking up, you get into a different problem, and the problem is all about scale. And ladies and gentlemen, you'll see me or hear me repeating the word scale many, many times, and this is the reason.
Because this whole problem that got created as sort of an aftershock of how easy it became to sign agreements was to do with finding out how many of your agreements are coming up for renewal, by what date, and what are the renewal terms. And in many cases, these things are buried deep in the agreements themselves. In many cases, in PDFs.
In some cases, even on paper that needs to be OCRed and brought in. So that is the enormity of the problem that was facing many of our biggest customers, and that was a problem that they brought to us. There wasn't another solution.
There was no solution at the scale that they needed to solve this problem. And as Deloitte has coined it, it's like this agreement trap that even governments have fallen into. Because some large entities like that pretty much have standardized on Docusign or on digital signatures and agreements.
And now they have all of this very important stuff, but buried in these documents that they have no way to act on in a timely fashion. Now, this is a very large number. And it's a legitimate question to say, "Hey, I'm a little skeptical about this.
Like, really? Two trillion dollars?" So I'll give you just two concrete use cases, if I may.
The first one, you know, many of us have been in a situation where we have a lease on an apartment or some kind of a rental thing, or maybe a car. And it has an agreement that you signed some years back that says that, like, if you don't give a notice of two or three months, then it automatically rolls over. And maybe you've fallen into that.
I have. So that is one kind of thing. The other one -- I think maybe being, you know, at Microsoft, in this room -- enterprises that do business with each other in many cases have renewal terms that are even harder.
So for instance, you sell, you know, something from Azure, maybe, and you don't anticipate that the enterprise will just wholeheartedly adopt it in the first year. So you set a discounted rate for the first year, anticipating that a smaller percentage may, like, roll forward in the first year. And then you decrease, like, the discounts in the subsequent years, hoping that the enterprise will roll out fully by, let's say, year three.
But what if you forget to actually decrease those discounts? At Microsoft scale or some of our larger enterprise scales, this could translate into millions, if not tens of millions. And this happens a lot.
This is why this number is actually correct. And it's a staggeringly large number. There was no solution.
So what you saw in that video are the first steps that we've taken toward solving that. And so what does this look like? I think maybe the video went a little quickly, so I will pause here on a few important pieces of the user experience just to bring this alive.
The first thing is, like, if you are already an e-signature user or a customer of ours, then you would see this new thing called Navigator on the left rail. And the other thing that's interesting to see in this slide or in this screenshot is that some of the agreements that you know and understand very well, because they came from you, are now available on that pane, but with these peculiar purple accents. And what is that?
And that is really where the action starts, is that these accents, these purple decorations, signify that the AI has done something important about these documents, and they have pumped out some of the important information that was buried deep inside, and they have basically elevated it up into metadata. And now that metadata is available to glance at, to query on, and to also set up notifications, reminders, things like that. And so here you're seeing now, let's say you clicked into one of those documents from the previous slide.
This is what you would see. You'll still see the original, you know, document in all of its glory, in the PDF, and you could scroll up and down. Now it's all searchable as well, so you could do full-text search.
But most importantly are the new things that you see on the left rail, because now it tells you, you know, what the type of this agreement is. So you say, "Ah, okay, this is an MSA. I know all about MSAs.
I have other MSAs maybe that are outstanding that I need to act on," and so on. So you can plan better. It tells you what are the parties that are negotiating this agreement.
And then most importantly for many of these use cases, like I said before, is the renewal dates. And then these dates actually feed into notifications, but also renewal terms. Because if you know the date, you know the terms, maybe you go back and re-engage prior to the renewal, and you try to, you know, get a sweeter deal, or you try to explain the circumstances and work better together.
So all of these things are very important to users. And then, of course, like I said, the thing that is really, like, the money shot is all about renewals, renewal terms. And also, as we are finding out, as people are starting to use this and give us more feedback on what they would like to see next, a lot of it has to also do with contracting parties.
So if you've made a deal before and you want to go back now, you know, maybe you want to shop around. Maybe you want to ask them, like, "Hey, time has passed. Maybe things have been commoditized in the meantime.
Can we get a better deal? What about volume discounts?" This kind of thing.
All of these conversations can come to the table. And in many cases, these can be mediated with AI. So now we're going to start to look a little bit under the hood, because otherwise there's not much point in an architect talking about this stuff.
So we will talk a little bit about some of these things. But as I start out on this architecture journey, I wanted to point out the founding or the foundational pillars, if you will. One I've already talked about.
Scale. It's all about scale. Because, you know, if we couldn't solve this at the huge scale that our enterprises need, then that $2 trillion problem from the slide just goes unanswered.
And the other thing that we found is that because we want to get into many verticals very quickly, almost simultaneously -- and you found mention of some of those in the video -- we need a very sophisticated language model to be able to make sense of jargon, of vertical-specific terminology, for example, in the medical and hospital scenarios. Legal, of course, is a big jargon-rich space and so on. So AI, especially with generative AI and LLMs, became very important to this part of what we needed, and it's why for a long time we kind of knew the shape of the solution but we really didn't have the technology that we have now with Azure OpenAI.
And then the third thing that might not have been as obvious from looking at the video is that we have a mix of both historical processing, which has to happen in bulk at very large throughput, and also real time, because customers are coming in with fresh agreements and they want to query the system in near real time. Find out, you know, "Hey, what am I getting into? Is this a good deal or not?
What's the history of dealing with these parties? Like, should I be asking for a discount?" All of this stuff.
So that is a real time interaction. So the system has to be good at both. And the fourth one should go without saying, but it's really important.
So I might as well say it anyways. Without extremely high accuracy from the AI, it's pointless, right? Like, why would you go and make mistakes and get egg on your face?
And, you know, I think we all know that before AI really came of age, as it has in the last few years, this was an issue. So now I start to unveil even more, and here you'll find on the left side of the diagram three of our top product lines. One -- I'm going to start from the bottom -- is e-signature, just because it's most familiar.
You guys have all used e-signature and hopefully used it on us, and so let's start there. When you do an e-signature, you know, flow, that needs to be transactional. It's almost like, you know, ATM in a sense, because, you know, the moment at which you sent it out for signatures, the moment at which somebody signed and took it, brought it back to you, these turn into contracts.
These turn into binding contracts. And so there need to be ACID guarantees. You need to be able to roll some of these back and so on.
So not surprisingly, therefore, the first point of touchdown for the incoming e-signature transactions is a transactional DB, Azure SQL. The other thing to note in this diagram is that you'll see lots of instances of Azure things. And what I will start to explain is why that's so important: that's how we solved our scale.
At the scale that we needed, with these pieces -- Azure SQL taking in the data from e-signature, and then what's happening with Azure Data Factory and then later, Azure Databricks -- what's going on is that they're adding more structure. As the data is digitized, as it starts flowing into the pipeline, we start to recognize aspects of it which match with entities that we already know about. And so we start to overlay some structure.
And then as we structure it, then we can lay it out in a data lake. And then once it is there, that is when we have all the ingredients in place to call into Azure OpenAI. That's the block that you see in the middle there, which is really the secret sauce.
And that's where I'm going to take you next. And then later on, we'll come back to the CLM flow -- I'll keep that towards the end. And then up above, you're seeing the new stuff.
That's what you saw in the video: the Navigator and IAM users. You saw that they were looking and slicing and dicing off of reports. So that's being fed off of Kusto.
And you also see a Cosmos DB over there. Cosmos DB is our schema and metadata store. It also stores the enrichments that come out of the AI.
So then the purple decorations will now make sense, because where does all of that get stored? It gets stored in Azure Cosmos DB after we've elevated it up to the level of metadata, alongside the metadata that may have come in from the envelopes or with the document. So once all of that is there, now we can serve up the queries, the search queries, and also the reporting and slicing and dicing that you saw in the video that the Navigator end user can take advantage of.
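To make that concrete, here is a minimal sketch of what persisting AI-extracted agreement metadata in Cosmos DB could look like with the Python SDK. The account, container names, field names, and partition key are illustrative assumptions, not Docusign's actual agreement data model.

```python
# Minimal sketch: persisting AI-extracted agreement metadata in Azure Cosmos DB.
# Endpoint, key, database/container names, and the document shape are illustrative
# assumptions, not Docusign's actual schema.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("agreements").get_container_client("agreement_metadata")

enriched_agreement = {
    "id": "envelope-1234",
    "accountId": "acct-42",              # partition key (assumed)
    "agreementType": "MSA",
    "parties": ["Contoso Ltd.", "Fabrikam Inc."],
    "effectiveDate": "2023-01-15",
    "expirationDate": "2026-01-14",
    "renewalNoticeDays": 90,
    "totalContractValue": 250000,
    "source": "azure-openai-extraction",
}

# Upsert so re-processing the same envelope simply refreshes its metadata.
container.upsert_item(enriched_agreement)
```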
And of course, there's some pieces in the middle. There's the API layers, and there's a DMS. DMS is just a document management service that runs on straight up Azure blobs.
And so that's kind of the architecture. I'll come to the middle part with CLM later on. But like I said, AI is the secret sauce.
And so now I'm going to start to get a little bit detailed, and I'll soon pass over to Olivia again to get into even more detail. So at our scale -- and like I warned you a few minutes back, I'm going to keep repeating the word scale many times, because that is why -- you know, it's not like we're some slapdash startup showing you a prototype. This is processing like millions of documents every day, even every hour.
So how do we pull this off? So we have to basically drive that pipeline that gets to Azure OpenAI at extremely high throughput, but also extremely high PTU utilization. And the way we do that, like I said, we start to learn more about the agreement as it starts to flow through the pipeline.
We overlay some structure. Now we are able to pick out the most relevant pieces of it with the context that's needed to answer the questions that will fill in more information, like renewal dates, renewal terms, what happens if, and so on. So we package these up into snippets.
We marshal them inside of like what we call meta prompts. And then we shoot that off into Azure OpenAI so that we can get the most bang for our buck. We can drive optimal PTU utilization.
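As a rough illustration of that snippet-plus-meta-prompt call pattern, here is a minimal sketch using the openai Python package against Azure OpenAI. The deployment name, prompt wording, and expected JSON fields are assumptions for illustration only.

```python
# Minimal sketch of the "snippet + meta prompt -> extraction" call pattern described
# above. The deployment name, prompt wording, and response shape are illustrative
# assumptions, not Docusign's actual prompts.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="<key>",
    api_version="2024-06-01",
)

snippet = "...relevant clause text pulled from the structured pipeline..."

meta_prompt = (
    "You extract agreement metadata. From the clause below, return JSON with "
    "agreement_type, parties, expiration_date, renewal_notice_days, and renewal_terms. "
    "Use null for anything not present.\n\nClause:\n" + snippet
)

response = client.chat.completions.create(
    model="<gpt-4o-deployment>",   # provisioned (PTU) deployment name, assumed
    messages=[{"role": "user", "content": meta_prompt}],
    temperature=0,                 # deterministic output for extraction
)

print(response.choices[0].message.content)  # JSON to be validated and stored downstream
```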
And then it comes back as extraction results. And like I showed you on the previous slide, we go and store that in ADM, which is an agreement data model, and Cosmos DB is the store for that. So with that, I'm going to pass it back to Olivia to tell you a little bit more about our secret sauce.
OLIVIA SHONE: Thanks, Kunal. [ APPLAUSE ] So I get to tell you about just one piece of this very complex architecture, but it's a piece that a lot of customers want to talk about, Azure OpenAI service. And it stems back to this partnership that we have with OpenAI that is a much televised partnership.
It's certainly one of the ones that has attracted the most news recently. And it goes back to even before we started serving Azure OpenAI service to customers: OpenAI was training their models on Azure. They came to us because we can scale.
Similar to what Kunal was saying, Docusign is able to scale quickly using Azure. That was a similar reason to why OpenAI selected Microsoft in the first place and runs everything through Azure, trains their models in Azure. And it's been a really wonderful partnership where we can ensure, or help them ensure, that artificial general intelligence benefits humanity, but also fulfill the Microsoft promise of empowering every person and organization on the planet to achieve more.
What you see when you combine these two into Azure OpenAI service is a wonderful combination of these cutting-edge models that come from OpenAI. And they truly are cutting edge. They've been able to land really impressive models that have really changed this AI story, this era of AI that we're all experiencing.
And we're able to combine it with what we at Microsoft offer through Azure: the secure data and built-in responsible AI, this global availability. And we're able to offer all of the models you see down below, which are what you can get from OpenAI, from o1-preview through to DALL-E 3, through to the Whisper models that are complementary to some of the other speech models that we offer. So we have some commitments to our customers from Azure OpenAI service.
The first thing you'll see here is that we do commit to offering all of the models the same day as you're able to access them from OpenAI directly so that you can trust that you are getting the latest and greatest. The second part is the flexibility in those offerings, and I'll share a little bit more on that next. And similarly, with your data being private and secure.
This fourth point about safety built in, we have a product called Azure AI Content Safety that is really game-changing in terms of making sure you're able to deliver on your responsible AI commitments, and a lot of those content filters are baked into Azure OpenAI service. They're just on by default. And across the Azure platform, we take it very seriously, our commitment to responsible AI.
There's an entire Office of Responsible AI at Microsoft that we work really closely with. And that continues to be really critically important as we continue to build AI into our portfolio. The next point is about enterprise promises.
And this gets to a lot of what Kunal was talking about in terms of making sure we can scale, making sure we have all of the other tooling that you need to fulfill those enterprise promises when you're processing millions of signatures per minute. And the developer-friendly integration with different parts of the platform. And then being able to offer model choice, of course.
We've got over 1,800 models, and being able to give customers that choice as well. So I mentioned the deployment options. So one of the things that customers are excited about is using the Standard option, just paying per token as you need it, to use Azure OpenAI service.
We recently introduced provisioned options, which means we'll put capacity aside. You get your throughput over here. You know exactly how much you're going to spend.
And we introduced this batch processing as well. And it really allows you to optimize depending on what your use case is. And then you also see at the bottom there data zones.
What we did there is we offered the ability to keep all your data in the US or all your data in the EU. So for folks that have those kinds of regulations where they need to make sure all of their data stays within the EU, but they don't really mind where -- they just want to go where the best capacity and the best models for their use case are -- they can just guarantee that it's part of the EU. Similarly in the US, some people need it in specific geos, which is where you have the regional availability.
Some just need to know that it's in the US, and other than that, they're flexible. So really being able to offer customers the flexibility to deliver as they need to. And then this is the last slide, but it's really important that we do guarantee when you are using Azure OpenAI service, your data is your data.
If you are fine-tuning and optimizing your models, that is for your exclusive use. If you are Coke, we are not going to share your information with Pepsi. It is very much your own data is your data.
And I think this question came up originally because of this really close partnership with OpenAI that we have. People were worried we were giving them information, and we are not sending any information to OpenAI. Not only that, we're not using customers' information to train our own 1P, our own first-party products, like the wonderful AI built into M365 Copilot or PowerPoint.
We're not using customer data for Azure OpenAI service to train any of that. So you can rest assured your data is your data. It's not being fed back to OpenAI.
And being able to really offer that kind of confidence to customers is really important to us. And now I'm going to pass it back to Kunal to share a little bit about why Docusign selected this offering. KUNAL MUKERJEE: Yep.
Thank you, Olivia. So pro tip about these technical presentations, don't bring up privacy and data residency as questions. But as I've done here, I've let Olivia first lead with the answers so you guys know that, like, all the bases are covered, especially when it comes to data privacy and residency and things like that.
And now I will reveal that without that, it would have been a no-go for us. Because, of course, in this day and age, like, that's the first thing that people, all customers, want to know. Because as you can imagine, these are business-critical contracts, in some cases, like legislation, that type of thing.
Like, corporate legislation and things like that are in these agreements. And so privacy, those kinds of guarantees are paramount. So they would have been, like, literally a no-go for us.
And that was one of the reasons: Azure not only implemented these kinds of guarantees and safeguards, but was very transparent about them. They have, like, some of the most beautifully-drawn architecture diagrams as to how the data is secured, things like that, which made it really easy for us to explain to our customers, "Hey, we're not just coming up with a slapdash solution. These have been well thought about."
This is the architecture diagram from Azure OpenAI for how they secure data. And in cases where there's no opt-in, it literally doesn't go anywhere, like she just explained. So really, really important.
The other thing I wanted to mention is fine-tuning. So as you guys know, you have the baseline LLMs, but then you can also fine-tune -- GPT-3.5 Turbo, I think -- as well.
And there, again, the privacy of the data that we use to fine-tune becomes very important. But once we've fine-tuned it, then the LLMs really have this domain knowledge schlepped on as well. And so now they can do, like, even more fancy things and offer up more customer value.
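For reference, a minimal sketch of what kicking off such a fine-tuning job can look like with the openai Python package against Azure OpenAI. The training file, base model name, and regional availability here are assumptions to check against current documentation.

```python
# Minimal sketch of starting a fine-tuning job on Azure OpenAI with the openai
# package. File name, model, and availability are illustrative assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="<key>",
    api_version="2024-06-01",
)

# Training data: JSONL of chat-formatted examples carrying vertical-specific jargon.
training_file = client.files.create(
    file=open("agreement_terms_train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0125",   # base model name, assumed; check region availability
)

print(job.id, job.status)
```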
So it's very important. Like, why did we choose Azure OpenAI? There are these facets of it, like data privacy, the fine-tuning, the agility with which the team has been not just bringing new APIs and making them available to us, but also optimizing them. Like I said, again, back to my, you know, broken record bit with the scale.
So the speed of innovation, the speed at which the APIs are maturing are big attractors for us and made this choice easy for us to make. So now we'll go into some of the other areas of the architecture. And as you guys will recall, the whole story, in a fashion, starts for us from the transactional side.
And as you can imagine, we've been at this for a long time. And so, like, I just tossed out a few numbers earlier. Like millions of documents coming in a day.
You can imagine over the years, the database is humongous. And as you're well aware, for all of you who I'm sure have had to deal with very large and fast-growing databases, this presents multiple challenges. Of course, supportability, maintainability, if you ever have to do migrations, and security as well, those kinds of things become harder.
But also, like the query efficiency, the query latency, throughput, these things are very sensitive to skew. And so then what do you do? And everybody knows this.
Horizontal sharding and vertical sharding, so as to equalize the load if you can, or to the extent you can. And so we do both. Not surprising, we have vertical sharding that is scenario-specific, and you see some of these instances here, like with branding, with usage, and so on.
And then for horizontal, we use some partition keys, like a counter ID, that kind of thing. And with that, we have managed to hit the right sort of balance in how do we lay out the database topology for optimal performance. And not surprisingly, now it's getting a little bit into the weeds of how does e-signature work, and a lot of it comes into envelopes.
And so typically, there's an envelope ID that turns into a shard key. We have also experimented and found some nifty ways to pack, inside of a single GUID, the envelope ID combined with some other information. And that helps us to partition these databases nicely.
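As a conceptual sketch only (not Docusign's actual scheme), the two ideas here -- hashing an envelope ID onto a horizontal shard, and packing the envelope ID together with other routing information into one composite key -- might look something like this in Python; shard count and key layout are assumptions.

```python
# Conceptual sketch of (1) hashing an envelope ID to pick a horizontal shard, and
# (2) packing the envelope ID with other routing information into a single key.
# SHARD_COUNT and the key layout are assumptions.
import hashlib
import uuid

SHARD_COUNT = 16  # assumed

def shard_for_envelope(envelope_id: uuid.UUID) -> int:
    """Map an envelope ID onto one of SHARD_COUNT horizontal shards."""
    digest = hashlib.sha256(envelope_id.bytes).digest()
    return int.from_bytes(digest[:4], "big") % SHARD_COUNT

def composite_key(envelope_id: uuid.UUID, account_id: str, doc_seq: int) -> str:
    """Pack envelope plus extra routing info into one key so lookups stay single-shard."""
    return f"{account_id}:{envelope_id}:{doc_seq:04d}"

env = uuid.uuid4()
print(shard_for_envelope(env), composite_key(env, "acct-42", 7))
```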
And here, getting into Cosmos DB: if you guys recall, I pointed out earlier that SQL is where everything lands initially as a transactional store. But Cosmos DB is a NoSQL solution, which holds the structure; it's like our schema store, the entity store. And there, we use some other things.
Like, we've followed slightly different strategies for scaling out SQL versus scaling out Azure Cosmos DB. On the Cosmos DB side, we have a different type of data. There's account IDs and all of that, but there's also entity IDs now.
So getting into some of the nitty-gritty of the data model, like I said, every vertical tends to have their own agreement structure. As you guys may know, SOWs will look quite different from MSAs, which will look quite different from RFPs, which, if you've dealt with those kinds of agreements, will look completely different from HR-type agreements and so on.
So that knowledge, that information is carried inside of our agreement data model, which is housed inside of Cosmos DB. And again, going back to my broken record bit, but scaling for good performance is the name of the game. Once we had nailed this, we had a solution, and we could come out to market as we did.
So like, we have leveraged dynamic autoscaling for Cosmos DB. That was very effective, is what we found. As you can imagine, agreements tend to have spikes, seasonal spikes.
So like, Q4, end of Q4, this is when a lot of agreement scanning and processing happens. And at other times, the fact that autoscaling can draw back the footprint, you know, saves us a lot of money. So this has been very useful.
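A minimal sketch of enabling autoscale throughput on a Cosmos DB container with the Python SDK, so RU/s can ramp up for those Q4-style spikes and draw back afterwards. The database, container, partition key, and the 20,000 RU/s ceiling are assumptions.

```python
# Minimal sketch of autoscale throughput on a Cosmos DB container, so capacity can
# scale up for seasonal spikes and back down afterwards. Names and the 20,000 RU/s
# ceiling are assumptions.
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("agreements")

container = database.create_container_if_not_exists(
    id="agreement_metadata",
    partition_key=PartitionKey(path="/accountId"),
    # Autoscale between 10% of the max and the max, billed for what is actually used.
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=20000),
)
```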
The global aspect, I think, Olivia touched on; it's very important for us. We are a global company, and Azure has a global footprint. And, again, the third one should not come as a surprise to anyone who has used these databases at scale.
It's really like a, you know, Swiss Army knife. It's got so many configurations, and if you're just patient, you can work through your workloads, figure out the configurations that make sense, and experiment with some other ones. The teams have also been super helpful, very responsive, always taking good questions and coming back very quickly.
And so, that has really helped us solve this problem at scale, very elegantly, I feel. And, of course, you know, we try our best to follow the NoSQL data modeling best practices that are published. The other thing that I think is a game changer is actually the AI as an advisor, to also point us to relevant pieces of information, community feedback, portals.
All of this comes together really nicely because now there are these AI companions as well. And with that, I'm going to turn it over to Deborah to talk about some of the innards of the SQL side of the house. DEBORAH CHEN: Cool.
KUNAL MUKERJEE: SQL and Cosmos DB. DEBORAH CHEN: Cool. All right, thank you so much, Kunal.
It's really awesome to see how customers like yourself have used Cosmos DB and Azure SQL DB to build these really highly scalable multi-tenant applications. Let's dive a little bit more into Cosmos DB. So like Kunal said, it's a globally distributed NoSQL database.
And at its core, the thing that makes it really great for scaling is that it has automatic horizontal scaling and partitioning built in. This means that if you're building any multi-tenant system -- whether you're starting on day one, onboarding new customers, or getting to petabyte scale of data and beyond -- Cosmos DB scales seamlessly in storage and throughput as your workload grows. It does all this while maintaining five nines of availability, and it also has SLAs around latency, throughput, and availability.
More recently, we've made a lot of investments in the product to make it really shine as a vector database. This means that if you're building a generative AI application, you can actually take advantage of the fact that you can store your transactional data in Cosmos DB together with your vector data to get the most relevant results for the apps that you're building. We'll take a quick look at some of the announcements we just made here at Ignite.
Vector search is now generally available in Cosmos DB, with some improvements and compatibility with parts of the Cosmos DB surface area that weren't there previously. But I really want to highlight that we've also now made generally available the Microsoft DiskANN suite of vector indexing algorithms, which is the same technology that powers things like Bing's vector search. This is really the most cost-effective, most efficient way to do vector search across high volumes of data and at large scale.
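For illustration, a minimal sketch of what a vector similarity query looks like in the Cosmos DB NoSQL API, assuming the container was created with a vector embedding policy and a DiskANN index on an /embedding path. Container names, dimensions, and the query vector are placeholders.

```python
# Minimal sketch of a vector similarity query in the Cosmos DB NoSQL API, assuming
# a container with a vector embedding policy and a DiskANN index on /embedding.
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("agreements").get_container_client("clauses")

query_vector = [0.01] * 1536  # embedding of the user's question, e.g. from Azure OpenAI

results = container.query_items(
    query=(
        "SELECT TOP 5 c.id, c.clauseText, "
        "VectorDistance(c.embedding, @v) AS score "
        "FROM c ORDER BY VectorDistance(c.embedding, @v)"
    ),
    parameters=[{"name": "@v", "value": query_vector}],
    enable_cross_partition_query=True,
)

for item in results:
    print(item["id"], item["score"])
```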
For those of you who are building search experiences in your applications -- like maybe product recommendation assistants, or "find me all my documents that contain this keyword" -- full-text search is now available, which expands the Cosmos DB query surface so you can search across text in all of your documents. You can also combine this with hybrid search, so that if you want to take results from both vector search and full-text search, combine them, and re-rank them, you can get even more relevant results for the app you're building. On the SQL side of the house, we have some new announcements in Azure SQL Hyperscale as well.
You can now store up to 128 terabytes of data in a Hyperscale database. The log throughput rate has also increased 50%, to 150 MB per second, which should hopefully help out those of you with write-heavy, high-ingestion kinds of workloads. And finally, there's a new feature called continuous priming, now in preview, that basically primes the secondary replicas, keeping them aware of the most frequently accessed pages, so in the case of a failover, they're primed and ready to go, which helps with performance during those kinds of failovers.
And finally, we're also very pleased to announce that the Microsoft Fabric Mirroring experience for Azure SQL Database is generally available as well. With this feature, it's a free, easy, efficient way for you to take the new updates coming into your SQL Database data and get them incrementally replicated into Fabric OneLake, where you can do things like real-time analytics, Power BI visualization, and so on, all with no data movement or ETL. Okay, feel free to come by the booth if you want to ask us more about any of these updates.
And with that, I'll turn it over to Kunal for the last part of his architecture to talk more about how Docusign uses logic apps to orchestrate their workflows. KUNAL MUKERJEE: Thanks, Deborah. DEBORAH CHEN: Thanks, Kunal.
[ APPLAUSE ] KUNAL MUKERJEE: So that continuous priming one, I have to get back with you, because I think that'll save my DBA team like tons of headaches and complexities, so that's really very exciting to hear that that's coming along quickly. The other thing that I think I should mention, which I have not been hiding, but it is really important, is that when we as customers, when any of you guys as customers of Azure sign up for an Azure service, then of course there's the functionality, and that is what gets displayed at places like these and in PR and videos and things like that. But there's a very, very important ingredient that we're all buying into, which is the team that stands behind those services.
And what we have found is that the teams are extremely responsive, of course extremely well-informed, and really steeped in customer-first culture. That has made this entire journey, I think, not just possible -- or, like Olivia mentioned, being able to scale quickly and get these products out to market -- but also let us sleep easy, knowing that these service teams are standing behind these services and maintaining them at extremely high availability and quality. So with that, I've got to get into the last part, and as you guys may remember, I kept this one intentionally for the last. So here you're seeing another of our three main product lines: CLM, contract lifecycle management.
And for that, the workflow is paramount. And for that, we have adopted Logic Apps as a universal workflow substrate. But let's come back to why it is so important.
You say, "Kunal, why is e-signature -- and then you talked about Navigator and the IAM, and that kind of makes sense, but, like, why is workflow so important? " And the reason is because, as you may think about, like, when does signature happen? Well, it's kind of midstream.
If you think about the whole flow, it starts with people coming together, maybe discussing a potential deal, and then somebody starting to write that up. That gets iterated, that gets reviewed, that gets approved, and in some cases it goes back, people add more addendums -- especially in the legal domain, as you can imagine, this process itself sometimes goes on for months -- and then finally signature happens. But that's still not the end. Now you have to go into fulfillment; in some cases, there's delivery of goods and services, and then there's payment. It's called quote-to-cash, which is sometimes how we characterize the end-to-end. And then, after that, the agreements sit in some repository. But as we've hopefully shown earlier, having made the case for the $2 trillion agreement trap, you'd better have a system like ours, not just holding the agreements in perpetuity as a dead piece of paper, but actively informing you when it's time to renew, what the renewal terms should be, things like that.
So that's the entire flow, and that's why workflow is so important, and that's why Logic Apps is so important for that white piece sitting in the middle. And so, you know, you can see the flow there; there's not much there, but the reason Logic Apps was found to be really great is because, again -- scale, no surprises there -- it became very easy for us to have a visual way of wiring these together. It shouldn't come as a surprise to those of you familiar with things like Power Automate: same principles, except with, you know, Docusign gestures that enable our users to wire these systems of workflow up very quickly, test them, and then put them into production.
So with that, they're able to really unify everything to do with agreements, storage of contracts, and all of that, in one sort of system. And Logic Apps itself came with many nice things that were great for us, such as highly scalable, performant workflows, and we've seen other aspects, like the global footprint, things like that. We have based our fabric, if you will, our workflow fabric called Maestro on that.
You know, not surprisingly, a maestro conducts an orchestra, and so, in some sense, that's the metaphor that we're going for. But then we also built some things on top of Logic Apps, like you're seeing in the last column there. So we implemented things like instance-level pooling; that made it very easy for us then to do things like billing attribution, but also to make really good utilization of the workflows themselves.
It helped with, you know, spin-up latency, startup latency, things like that if you're able to group like workflows and then pool and reuse them. And things like rate limiting, instance limits across Logic Apps, and then alerts on user spikes. These things just made it easier for our teams to maintain.
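As a purely conceptual sketch (not Docusign's Maestro implementation), the pooling and per-tenant rate-limiting ideas described here might look roughly like this; all names and limits are assumptions.

```python
# Conceptual sketch only: pooling "like" workflow instances for reuse and rate
# limiting submissions per tenant. All names and limits are assumptions.
import time
from collections import defaultdict, deque

class WorkflowPool:
    """Reuse pre-warmed workflow instances of the same type to cut spin-up latency."""
    def __init__(self):
        self._idle = defaultdict(deque)   # workflow_type -> idle instance ids

    def acquire(self, workflow_type: str) -> str:
        if self._idle[workflow_type]:
            return self._idle[workflow_type].popleft()   # reuse a warm instance
        return f"{workflow_type}-{time.monotonic_ns()}"  # otherwise "spin up" a new one

    def release(self, workflow_type: str, instance_id: str) -> None:
        self._idle[workflow_type].append(instance_id)

class TenantRateLimiter:
    """Simple sliding-window limit so one tenant's spike cannot starve the rest."""
    def __init__(self, max_per_minute: int = 600):
        self.max_per_minute = max_per_minute
        self._starts = defaultdict(deque)  # tenant -> recent submission timestamps

    def allow(self, tenant: str) -> bool:
        now = time.time()
        window = self._starts[tenant]
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.max_per_minute:
            return False                   # a point to raise an alert on user spikes
        window.append(now)
        return True
```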
It reduced the toil on our DRIs and on calls. And so, with that, we felt very confident that we have a really good workflow substrate, but also we have the things on top that made for easier manageability, debuggability, and performance. So with that, I think I turn it back to Deborah to bring it home.
Yeah, but please stay on stage, Kunal. KUNAL MUKERJEE: I'm staying on stage, okay. DEBORAH CHEN: Thanks.
Before I get into the Q&A, I just want to plug a few resources you can check out. Some customer stories, our Cosmos DB blog as well. There's some more sessions that are related if you want to learn more about AI or any of the databases used.
Specific plug tomorrow, if you've been wanting to get hands-on and actually build your own NoSQL Copilot chat app with a RAG pattern, check out our lab. That's tomorrow morning on Friday. You can actually do that end-to-end with the code samples.
Finally, if you're a customer of Cosmos DB, feel free to stop by our booth. If you want to leave a review on PeerSpot, we are offering $50 gift cards if you want to take a look at that. But otherwise, I'll turn it over for Q&A, and I think Kunal will be happy to take any questions.
KUNAL MUKERJEE: Yeah, or any of us. DEBORAH CHEN: Yeah, or any of us. KUNAL MUKERJEE: Questions for Olivia or Deborah or me.
DEBORAH CHEN: Yeah. SPEAKER 1: For Cosmos DB, are there specific SKUs that contain the new Gen AI features? OLIVIA SHONE: Yeah, so for Cosmos DB, unlike SQL Server where you're picking, like, a Business Critical tier, there's not really a SKU as such.
It's just Cosmos DB. So there's a provisioned throughput model where you can set the performance level you want, and there's a serverless model. Both of them work just fine and have all the AI investments that we've just announced.
Yeah. SPEAKER 1: So any of the APIs? OLIVIA SHONE: So the NoSQL API is where a lot of those are, and then if you're using the MongoDB vCore flavor of Cosmos DB, the investments are there as well.
SPEAKER 1: Okay. OLIVIA SHONE: Yeah. KUNAL MUKERJEE: Yep.
SPEAKER 2: Question for Kunal. KUNAL MUKERJEE: Yeah. SPEAKER 2: I was interested in the use of Azure AI Search in the pipeline.
I saw that reference. Can you elaborate on that? KUNAL MUKERJEE: Yeah.
So in that particular pipeline, we are not really -- okay, so there's a different thing we're doing which leverages RAG, and that is where we actually use AI as part of the search. But in the Navigator video that you saw earlier, the pipeline is left-to-right, and it is only enriching the agreements with things that were locked inside, by deciphering the vertical-specific jargon and other terms -- especially renewal terms, obligations that parties are entering into. Things like that are very important, as you can imagine, especially after agreements have been sitting around and maybe something has changed about the company -- M&A, this sort of thing. Following those types of big events, being able to glean all of that information very quickly at scale is something that enterprises were literally bleeding money over.
So enriching agreement after agreement at ingestion time is what we're using Azure OpenAI for, principally, at the moment. But going forward, we are very close to releasing that flavor that you're talking about, where people come in and they have some questions, and there is the LLM, which has the world data or the world knowledge, if you will. But then, by using RAG, we can also make it privy to the user-specific or the domain-specific information.
And thanks to what Olivia talked about, the privacy and the safeguards around that, we can do it securely with like all of these guarantees. So that's coming, but not yet released at the moment. Yeah.
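To illustrate the RAG flavor being described, here is a minimal sketch that retrieves a customer's own clause snippets from Cosmos DB by vector similarity and then grounds an Azure OpenAI chat completion on them. Deployments, container names, and the embedding model are assumptions, not a released Docusign feature.

```python
# Sketch of a RAG-style flow: retrieve the customer's own clause snippets by vector
# similarity, then let the model answer grounded on them. Names and deployments are
# illustrative assumptions.
from azure.cosmos import CosmosClient
from openai import AzureOpenAI

aoai = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com/",
                   api_key="<key>", api_version="2024-06-01")
clauses = (CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
           .get_database_client("agreements").get_container_client("clauses"))

question = "What notice period do we owe Fabrikam before renewal?"
q_vec = aoai.embeddings.create(model="<embedding-deployment>", input=question).data[0].embedding

# Retrieve the closest clause snippets for grounding.
hits = clauses.query_items(
    query="SELECT TOP 3 c.clauseText FROM c ORDER BY VectorDistance(c.embedding, @v)",
    parameters=[{"name": "@v", "value": q_vec}],
    enable_cross_partition_query=True,
)
context = "\n---\n".join(h["clauseText"] for h in hits)

answer = aoai.chat.completions.create(
    model="<gpt-4o-deployment>",
    messages=[
        {"role": "system", "content": "Answer only from the provided agreement clauses."},
        {"role": "user", "content": f"Clauses:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```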
Yeah. [ INAUDIBLE ] DEBORAH CHEN: Sorry. Could you repeat the question, please?
[ INAUDIBLE ] Oh, for SQL. Are you asking about whether SQL Server supports vector search and -- [ INAUDIBLE ] KUNAL MUKERJEE: SQL Azure? Yeah, again, like I don't -- so they have just one SQL Azure that we consume.
And if you're talking about the machine SKUs, are you talking about the machine SKUs? Like do we use large machines or -- [ INAUDIBLE ] Yeah. Yeah.
Yeah, so we -- I'll answer it. I don't see anyone from Docusign Legal here, so that's fine. Yeah, so at one point, we did go up to the -- like we maxed out the SKUs.
And then, if you remember, I talked about both horizontal and vertical sharding. What we found out was that their top SKU, you know, was kind of solving the problem, but we were throwing a lot of money at it. So then we reduced the data SKU and we made it a better distributed architecture.
And then we were able to go down SKUs. So we saved a lot of money like that. But the software itself, I don't think, has multiple SKUs.
It's just like SQL Azure. You know, it just keeps going up, right? DEBORAH CHEN: Yeah.
KUNAL MUKERJEE: Yeah. Okay. DEBORAH CHEN: Yeah, feel free to come by the booth if you want to chat more.
Looks like we're at the end of time. So thank you so much for attending this session. KUNAL MUKERJEE: Thank you.