[Music] Great, welcome! Thanks, everyone, for coming. My name is DeWitt, and my name is Ryan, and we're product managers on the Google Cloud team. Before we get started today, though, I wanted to sort of poll the audience, mostly because it's this afternoon and you're probably feeling a little sleepy. So, how many of you are running Kubernetes in your organization today, or planning to run it? Well, that's pretty good. We had a really good joke, and I was gonna say, "you with your hand down, wait six months, you're gonna put it up," but I don't think we get to use that joke. And then, how many of you are running serverless today, or using a serverless platform? Good, you all came to the right place. All right, well, this is gonna be exciting. So today we are going to talk about two things we know you're all excited about, and that's true: Kubernetes and serverless. And we know that because you just put your hands up, and the talk is called "Kubernetes, Serverless, and You," and you came. But also, we're going to introduce two new things that you're just learning about today.
One is the serverless add-on for Google Kubernetes Engine, and the other is Knative, the open-source project that powers it. A lot of people worked very, very hard on this, and we're gonna try our best to do it justice, but if we trip up, just applaud really loud right then. So let's get started. This is all possible thanks to Kubernetes. As you know, Kubernetes is the open-source orchestration layer for managing and deploying containers. Announced only four years ago, Kubernetes is now supported by every major cloud; it has taken off like wildfire. Every day, thousands of people contribute to the success of Kubernetes, allowing real businesses like yours to abstract away the underlying infrastructure. And in fact, it's that power of abstracting away the underlying infrastructure that has allowed us to think of infrastructure as a smooth, paved surface, available wherever you are, in your data center or in ours. But that's only half the story. The other half of the story for today is serverless, which is fundamentally changing the way developers approach building applications. Some of you in the industry may think of serverless as only meaning functions, but there's so much more to the promise of serverless than just running snippets of code in the cloud. In fact, we at Google see serverless not just as compute, but as the whole stack that allows developers to focus on what really matters, that core nugget of business value, and not have to be concerned with the complexities of the infrastructure that runs that code. In fact, Google has been working on serverless for more than a decade. Ten years ago, almost exactly, in April I guess, we launched App Engine, the very first platform as a service and one of the very first Google Cloud products. Since that time, over the last ten years, we've continued to innovate, providing a number of additional serverless solutions, and today we were very happy to announce the general availability of Cloud Functions. That's super exciting, but there's so much more to serverless; this is a journey that's still beginning, and I'd like to unpack a little bit what the promise of serverless really is, what we see it as.
Now, at Google we like to joke that serverless is about "no servers," and obviously there are still servers running in a data center somewhere, but it's about making sure that developers don't have to be concerned with what those servers, or that infrastructure underlying your application, really are. If you want to, you know, bring a container, deploy a function, or just an application, you should be able to do that regardless of what the underlying platform it runs on is. It also means being idiomatic, which is another Google-ism, I think, which is really to say that it's native to you as a developer; it's meeting developers where they are. So if you're a Go developer, you should just be able to be a Go developer; you shouldn't have to learn a bunch of quirks in a system. If you want to be a Node.js developer, just be a Node.js developer; don't worry about how to integrate with the platform. And then the third promise is about being event-driven. Today, if you want to wire up a bunch of different systems, you typically end up hard-coding those systems together, putting all of that logic into your code, instead of being able to declaratively bind things through an eventing framework. And we at Google see one more promise here, which is that the serverless revolution should be keeping you free of vendor lock-in. By adopting the promise of serverless, you should be able to take your code and run it wherever you feel it's most appropriate; whether that's in an on-premise data center, a third-party data center, or in the cloud, you shouldn't have to choose. You should be able to keep using the framework that you're using.
So Kubernetes and serverless, if you think about it, are kind of two sides of the same pursuit, two things that go great together, like chocolate and peanut butter, unless you happen to be allergic to chocolate or peanut butter, and then it's absolutely nothing like that. What Kubernetes does for the operator, hiding away all of the complexities of the underlying infrastructure that you're running on and allowing that smooth, paved surface, serverless does for developers, allowing them to not be concerned about the platform they're running on and just focus on the core of the code that matters. And in fact, we at Google believe in this so strongly, that this is the future of the way that compute should be run, that this morning we were happy to announce the GKE serverless add-on, which brings all of the wonderful operator-friendliness of Kubernetes and all of the developer-friendliness of serverless together into our Google Cloud Platform. So, the benefits of using the serverless add-on. Number one, it's easy to start deploying, with less code. Today, if you want to deploy an application on Kubernetes, you have to deal with a bunch of configuration; you have to write all of that and check it in. The serverless add-on makes this much, much simpler, and I'll show you how in a little bit. It also makes it easy to run a serverless-style workload, being able to go from source all the way to a deployed container with a URL in minutes. It automatically deals with all the network programming, all of the provisioning, the ingress, and all the different layers inside Kubernetes that you used to have to deal with before. And it helps with auto-scaling. You might be telling yourself, "hey, Kubernetes already auto-scales, why is he talking about this in the context of the serverless add-on?" And you're right, Kubernetes does scale, but the serverless add-on makes it scale better, and it makes it scale in a way that Kubernetes doesn't today: we can actually scale your workload all the way down to zero, which lets an operator in your company pack more applications into the same cluster resources and have everything continue to run really well. So now that we've introduced the serverless add-on, let's actually show it to you in action. If we can switch over to my laptop, I'll be very happy.
Good. So the very first thing that we need to do is stand up a cluster with the serverless add-on installed, and this is an alpha that we're talking about today, so you'll see I'm using the alpha parameter in gcloud. We're just gonna go create a new cluster and enable the serverless add-on, and, you know, if this works... ah, there we go. So I got a couple warnings about defaults that are changing, but that's fine. Meanwhile, I'm gonna go switch over and use a cluster that already exists, so that I don't have to, you know, sit here and watch that little thing spin for a second or two. I always like to get started with a new platform by taking a look at a hello world, so I have a simple hello world application that I've written in Go. It really doesn't get any simpler than this; if you can come up with a simpler hello world, let me know. And then I created a Dockerfile, because the way that this works today, in alpha, is that it requires me to have a Dockerfile and deploy a container.
This is just a very idiomatic, or native, Dockerfile; in fact, I just searched for a Dockerfile for Golang, got this, and used it. So how easy is it for me to deploy this on my own box? It's idiomatic; I should be able to test it and run it here, right? So, if you're familiar with Docker, I can just do `docker build`, and I'll tag it with a particular container repository. It's gonna go do a little build, and it's gonna produce a container for me. This went a lot faster in rehearsal... there we go. And I'll just run that container real quick so that you can see what this application does. You'll see it outputs a little bit of log, and if I go here... oh, you guys see what I did wrong? Come on. I need to expose the port in Docker to make sure that this actually works. All right, so there, very simple: just "hello world, welcome to GCP Next." Hey, it works! Man, if I can't get you guys applauding at the fact that I took Go and built it in Docker, I'm sad.
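For reference, a generic Dockerfile of the sort you'd find by searching "Dockerfile for Golang" might look roughly like this; the Go version, base images, and binary name here are illustrative, not the exact file used on stage:

```dockerfile
# Illustrative multi-stage build: compile the Go binary, then copy it
# into a small runtime image.
FROM golang:1.10 AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM alpine:3.7
COPY --from=build /app/server /server
EXPOSE 8080
CMD ["/server"]
```

Something like `docker build -t gcr.io/<your-project>/helloworld .` followed by `docker run -p 8080:8080 gcr.io/<your-project>/helloworld` (project name is a placeholder) mirrors the build-and-run steps in the demo, including the exposed port that tripped things up the first time.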
This is good. OK, but now let's do the same thing and actually run this in my serverless-add-on cluster. So I'm gonna run the `gcloud serverless` command, and I'm gonna say I want to deploy a service, and I'm gonna specify the image that I built. If you're paying attention, you'll notice I didn't push this; that's just to save time, I pushed this image earlier. So gcloud is gonna ask me what I want to name the service, and it provides a nice suggestion of "helloworld"; I'll take it. And now it's gonna go out and do all of the lightweight provisioning that's required to stand this up inside the serverless add-on, and hopefully this doesn't take too long, because again, in rehearsal it was really fast... done, cool. All right, so now I have a URL which is actually out there, live, and if I hit this, you'll see, hey, there's the same sample running out of my Kubernetes cluster. And you'll notice I didn't have to deal with anything in Kubernetes: I didn't use kubectl, I didn't create a manifest, I didn't look at YAML.
But let's actually go look at the YAML and see what happened. So the serverless add-on installs a few CRDs that make all of this possible. The first one I'm gonna look at is the Service CRD, so I'm gonna do `kubectl get service.serving` (if you have feedback on the name, let us know). What this is gonna do, when it finally loads... all right, there's just my hello world sample. I'm going to dump the YAML real quick, just to show you that there's actual YAML behind it, and point out a couple things. There's a bunch of status conditions on how the service has changed, but simply, here's the spec that got created: it defined a concurrency model for me, that my container supports multiple concurrency, and it just points at the container image. A really, really simple thing. And, you know, it has the domain that I'm being served from, and the revision. So what's a revision? Well, the Revision is actually another one of the CRDs that gets published, and a Revision is the immutable state of my application. It's tying all of the configuration details and the actual container image together into an object inside Kubernetes that describes it, and as I continue to make changes, it continues to snap new revisions, so it's easy to roll back, or to split traffic between multiple revisions, or deal with whatever sort of multiple deployments you want. And then the last object that I'm going to show you is the Route. The Route is another CRD that connects the outside world to the revision that's running inside my cluster; in this case it's using HTTP to route from that hello world domain that was published to the particular revision, and it's always pointing at the latest revision by default, but if I wanted to point at a particular revision, I could do that as well. All right, let's do a little bit more advanced example. This time around, I have written an actual guestbook, where you can post a message and see it online, using Cloud Datastore as the backend. It exposes two little APIs, in a very Go-friendly style of doing that, and it serves some static content using the HTTP file server object.
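Before the more advanced example, it may help to make one of those objects concrete. A Route like the one just described might be sketched roughly as below; the API group, version, and field names approximate the alpha serving API discussed in the talk and may not match any released version, and the revision names are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1alpha1   # approximate alpha API group/version
kind: Route
metadata:
  name: helloworld
spec:
  traffic:
    # By default all traffic follows the latest Revision, but a Route can
    # also pin percentages to specific Revisions for rollback or canarying.
    - revisionName: helloworld-00001   # hypothetical revision names
      percent: 90
    - revisionName: helloworld-00002
      percent: 10
```

This is the mechanism behind "split traffic between multiple revisions" mentioned a moment ago: each immutable Revision gets a share of traffic declared in one place.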
But this time around, I'm going to pretend that I don't have Docker installed, and I'm gonna say, hey, I have this code, how do I actually go about building that code and having it run on the service? So again I create a Dockerfile, a very simple one that installs some dependencies, but this time around I am going to get in and build some YAML. I'm going to define a Service, like gcloud did for me automatically, and I'm gonna say that my source code comes from this particular git repository, and I want to use the master branch. Then I'm going to use the Kaniko build template that's baked into the serverless add-on and have it build that code for me, push it into this container registry, and then deploy that container into my cluster. So I'm gonna do `kubectl apply`, and if I go and get my pods, you'll see there's this new pod that isn't the deployment pod; that's actually the build pod spinning up. Meanwhile, I'm gonna go out and take a look at Stackdriver.
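The configuration being applied might look roughly like the sketch below. The field names approximate the alpha API described in the talk, and the repository URL and image path are hypothetical placeholders:

```yaml
apiVersion: serving.knative.dev/v1alpha1   # approximate alpha API group/version
kind: Service
metadata:
  name: guestbook
spec:
  runLatest:
    configuration:
      build:
        source:
          git:
            url: https://github.com/example/guestbook.git   # hypothetical repo
            revision: master
        template:
          name: kaniko          # the Kaniko build template baked into the add-on
          arguments:
            - name: IMAGE
              value: gcr.io/my-project/guestbook   # hypothetical registry path
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/guestbook
```

The point of the shape is that one object declares the whole pipeline: where the source lives, which template builds it, where the image lands, and what to run.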
Of course, because this is integrated into Google Cloud, Stackdriver is automatically plumbed through as well, so I can see what's going on. I'm gonna search for "build" in my logs... hopefully... there we go. All right, so here's a bunch of build output, and you'll see it actually ended up pushing the container. Now, ignore the fact that everything shows up as a warning; like I said, this is an alpha, there's a few quirks to work out, and that might be one of them. All right, if I go back here and get my pods again, the build is done, and I now have a deployment, and I can go to my guestbook application, and if everything works right, I should see some guestbook entries here... not what I wanted to see. Oh wait, all right, I see why; I jumped the gun, it wasn't actually running. Now it's running, let's try again... ah, there we go. It actually works, and just to prove it, I'll post something, and you'll see it show up. Cool, now that's me. Let's see what happens if I actually throw some load at it. So in this window I'm gonna use `watch kubectl get pods`.
Right now I have a hello-world pod and a guestbook pod, and I'm gonna run a little script that's gonna throw ten threads of queries at this thing, and you'll see that the serverless add-on is responding to the load by scaling out the containers that I have here. It's gonna make sure that all of those requests are satisfied as fast as possible by spinning up as much capacity as it can inside my cluster. So that's really exciting. And the last part of this demo that I was hoping for... yep, there it goes.
You'll see the hello world demo is now actually spinning down; that pod is terminating, because nobody in the audience actually went to that URL, so it's hit the timeout limit, and now that pod is gonna go away. So you can see it's actually saving me resources inside my cluster because of that. So, this is just a really brief introduction to what the serverless add-on can do. There's so much more baked in here, and we're gonna tell you a little bit more today, but tomorrow, at the same time slot actually, there's a very deep technical dive into the underlying technology behind this that you should come to, and you can learn about all of the intricate details in here. OK, if we can switch back to the slides. Just to recap the demo really quickly: we went out and we turned on a Kubernetes cluster with the serverless add-on deployed, so I didn't have to load a bunch of manifests or anything to get all of this stuff installed; it was just there inside my cluster when I enabled it. We then deployed a hello world app from a container, using the gcloud command line to make it really easy. Then we built source actually on the cluster, safely contained inside the cluster, so nothing had to leave my cluster by doing that. And then we showed how it scaled up and down. And then events are another big piece of this; like I said, I didn't have time in this demo to show off events, but come tomorrow and we'll talk all about how events work in this space as well.
But, you know, it's great for me, from Google, to stand up here and tell you how awesome my technology is, but you shouldn't just take my word for it. So I'd like to invite Ram from T-Mobile up; he's a principal technology architect, and he'll tell you a little bit more about how T-Mobile has been using the technology inside the serverless add-on. Thank you, Ram. [Applause] First of all, huge congrats on the milestone; from all of us at T-Mobile, huge congrats to you guys. So, the choice of running workloads in different clouds is important for us, so when the Google team asked us to test-drive the serverless add-on, we couldn't be more excited. What I want to do in the next couple of slides is talk about a use case that we picked to test-drive the serverless add-on. Basically, we have this experience called Store Locator. Pretty much as you can guess, this is used by our customers to see what stores are nearby, so they can call the store, or even get directions so they can go out to the store and buy services. So, just to tell you a little bit about the service that's in production today: it's basically a Java application that is containerized and running in our Mesos/Marathon container platform. Not only did we have to build the service, we also had to write code to actually collect metrics and send the metrics to a time-series database. Scaling is largely manual: we create dashboards in Grafana to monitor, and when alerts kick in, we go in and manually scale it. The other part is, it's always running; even when the service is not used, it's always running, which means it's taking resources from the cluster. And, you know, if it's not used, why take resources, right? You could use those for other workloads, as I think Ryan talked about previously. The last bit here is, there's a lot of investment that's gone into CI/CD to get a proper pipeline wired up to go from dev all the way into production.
So we thought this was a great use case, because people are not always using it; you know, usually only when you have to go to a store. So it fits perfectly for serverless scenarios. So, what did we do? We basically ported the existing code, which is a big Java application, or really just the logic required, the business logic, into Golang functions. And because we were building it on top of the serverless add-on, we didn't really have to worry about, you know, creating the metrics, sending the metrics to our time-series database, and whatnot. We didn't have to worry about log collection, we didn't have to worry about scaling and auto-scaling, and our devs just had to focus on bringing the business logic over. We finished this project in literally a two-week sprint, which is kind of cool. As you can see from this slide, we basically took the business logic code and split it into two functions. One is the indexing piece, the indexer function; what that does is take the store data and put it into an Elasticsearch index. Because the serverless add-on is built on top of Kubernetes, we were able to deploy a production-grade Elasticsearch cluster into our same infrastructure with a Helm chart, so that was pretty easy. The other function you see here is the query function, which takes the request from the app, executes geo-distance search queries against Elasticsearch, and shows you the results. So, pretty straightforward.
Now I want to talk a little bit about where we're going with it. Today, as I said earlier, we have store data in different sources. We want to merge those, create basically a BigQuery data set, and have all of the store data in a central location; then, when the store data changes, based on those events, we want the index to be automatically updated, so that our customers are always seeing the most current data. And T-Mobile is pretty happy to be supportive of open source, so we want to open-source the store data set, so that other developers in the industry can actually build experiences that we didn't imagine, right? So we want to open-source the data set, and we also want to open-source the function code, because the store locator is something that pretty much every retailer needs; if you look at Best Buy, or different applications, the store locator is something we see everywhere, so we want to open-source that as well. And then the last bit here is, you know, we're always looking for ways to improve the experience for our customers and make it easy, so we want to integrate with Google Assistant, so that it gives a natural-language kind of interaction with the experience. I think that's about it. [Applause] Thanks, Ram; it's been a pleasure working with you and the T-Mobile team, and we really appreciate your passion around the serverless technology that we're building here and where we're gonna go in the future, and I can't wait to see where you guys take this stuff. So, that's the GKE serverless add-on. Like I said, it's available in an alpha program right now.
If you would like to request access to that, please check out this URL; it'll be available for everybody later this year. But, you know, there's a lot of time left in this talk, huh? T-Mobile talked a lot about open source; that might be a good idea, what do you think? Maybe we should talk a little bit more about that. All right, so, thank you Ryan, thank you Ram. As we were building the serverless add-on for GKE, and working with customers like T-Mobile, we had the opportunity to survey this space, and we realized that the marriage of serverless and Kubernetes is attracting a lot of interest. Rather than shy away from that, we saw it as a perfect opportunity to bring everybody together to collaborate, and that's what we're going to talk about now. I'm very pleased to introduce today Knative, which provides the building blocks for running serverless workloads on Kubernetes. And what I'm gonna show you should look familiar, because Ryan just showed it to you in the demo. This is real technology that we're deploying into our real clusters and supporting, and we believe that's the right way to approach an open-source problem. But you don't do it alone; doing this right requires an ecosystem, and we are very proud to have worked with some of the biggest and leading names in cloud, like Pivotal and SAP and Red Hat and IBM, to codify the commonalities for running serverless on Kubernetes, and we're gonna hear from some of them today. Before I get started, though, I want to sort of show you the landscape and show you where in the overall stack we think Knative fits. So we start first with Kubernetes, the universal platform; it's ubiquitous, it runs everywhere, and we take that for granted. And that's what allows us to do this, which is build a layer of building blocks, these primitive components for essential operations for running serverless workloads on top of Kubernetes. We're starting with build, serving, and eventing, but this is just the beginning; we expect it to grow from here. In fact, I've heard this referred to as "the standard library for running serverless on Kubernetes," and I think I'm OK with that analogy. And here we see the type of products that we envision being built on top of Knative. Google is sharing a few of those with the world today; earlier today we learned about SAP Kyma and Pivotal Function Service, and we'll talk about those now. Again, I can't emphasize this enough, so I added a whole other slide just to say it: this is not a science-fair project, this is not a 20% open-source project that we're shipping. Rather, these are Google products that we are taking the open source out of and giving away, and we're not doing it alone. Earlier today, SAP announced SAP Kyma, which is based on Knative serverless components.
They say that Knative will further enable SAP to accelerate enterprise serverless and event-based applications. SAP has been an incredible development partner throughout, and I'm really excited about what's going on with Kyma. They're around this week at the SAP booth, and I hope that everybody gets a chance to stop by and see what we're doing with Kyma. And IBM Cloud is investing in Knative; IBM, of course, has extensive background and experience running serverless workloads at scale with OpenWhisk, and we are working together. They are bringing their leadership and expertise to help bring serverless to a wider range of applications, developers, and industry than ever before. And Red Hat, who's been focused on open source and customer choice since the beginning, is investing in Knative to create a common building block for serverless on top of Kubernetes across the hybrid cloud. It's been absolutely wonderful working with Red Hat, and I'm very excited to see this in the OpenShift function service. And T-Mobile isn't just a customer; T-Mobile is also a contributor to Knative, with their Jazz framework. They are integrating Knative into the very fabric that T-Mobile runs on; Knative and Kubernetes and Jazz are working beautifully together to accelerate T-Mobile's move to cloud-native applications. It's been fantastic working with a partner and a customer who has so much to add in the open-source world as well; thank you, Ram. In fact, this is just the beginning; maybe the future of serverless is going to be built on Kubernetes and Knative. Some of you know Sebastien, creator of Kubeless; he says that Knative is standardizing the primitives needed to simplify code-to-production workflows.
It promises to be a solid foundation to develop the future of serverless computing. It's been amazing having his help, so thank you. And you've heard me talk a lot; now I'd like to bring up my friend Ryan Morgan, Vice President of Application Platform at Pivotal Software, who has done more for Knative than anybody. Thank you, Ryan. Thank you, DeWitt; it's been fantastic working with you and the team, and I'm excited to share with all of you our experience around Knative and where we see it going. But first, why serverless? Well, judging by the number of hands I saw earlier, maybe I don't need to explain this, but at Pivotal our mission is to transform the way the world builds software. Part of that is being able to deliver the right compute abstractions, on the cloud of your choice, for your workload. The top thing on there, the functions view, is a really important enterprise use case and an area that we've been invested in fairly heavily, and that's a problem that we want to solve for our customers today. Then, why Knative?
About a year ago, Pivotal partnered with VMware and Google to bring Pivotal Container Service to market. Pivotal Container Service is an enterprise-grade Kubernetes distribution that's deployable to vSphere and has the promise of constant compatibility with GKE. We view Kubernetes as a great platform for building platforms, and from that we started Project riff. riff had the top goals of being Kubernetes-native, with a focus on event streaming and first-class support for Spring and for Java, but also for other languages as well. One thing we've seen is that when you reduce the scope of concern for developers, it allows them to be more productive; as the stewards of the Spring Framework, we've seen this with Spring Boot, and functions just take that one step further. But as we were developing riff, we quickly realized that there are some missing pieces: there's a lot about container builds, things about incremental deploys, auto-scaling, and things that we weren't concerned with at the beginning. At this point, we were introduced to DeWitt and his team working on Knative, and we realized that riff and Knative were very complementary; the things that we were working on really fit in well with the plans they had for Knative. So we began working with Google to contribute some of our code into the Knative project, and through that, we then rebased the riff developer experience for event-driven functions onto Knative. So really, we're betting our future on Knative, and part of that is our first release today: this morning at 9:30, we released our first version of riff that is fully built on top of Knative. I also wanted to go into a few of the contributions that we've made.
The first is around the riff build and the build templates that are provided within Knative. Through our experience in Cloud Foundry, we have a lot of experience with buildpacks. Buildpacks offer the ability to just push your code and from that generate a runnable image; buildpacks are very easy to write, and there's been quite a bit of success with buildpacks in the past, so we're very excited to see what they might bring to Knative. The second is around riff and the topics and eventing that we were working on there, and the contribution of that into the channel abstraction within Knative. What that does is provide not only functions but also applications with the ability to publish and receive streams of events, for event-based functions. If you follow Pivotal, you might hear us talk about "the value line": the value line is the point in your architecture where changes below that line are not delivering value to your end customers. We feel that that concept applies in open source as well, and in this case, the building blocks that Knative provides on top of Kubernetes allow developers to worry less about infrastructure and focus more on your code. So for Pivotal, this all really comes down to a new product called Pivotal Function Service. This will be deployable and runnable on Pivotal Container Service, and will provide a common function abstraction across any cloud or on-premise. It will have strong support for Java and for Spring, but will also support other languages as well; and like all Pivotal products, it will have a strong focus on day-2 operations. Most importantly, it's really built upon components within Knative.
So this is all very new, this is obviously day one, but we're very excited about the future of Knative and look forward to working with you all in the community. Thank you. Thank you, Ryan. I can honestly say that we would not be here without the help and support and contributions of Pivotal; it's been fantastic working with Ryan and the team. I'm really excited about the future for riff, and I'm excited about the future for Knative, based on the things we've learned in that collaboration; it's really been fantastic, so thank you, Ryan. Now I'm going to tell you, at a very high level, so this is very low fidelity, a little bit about what is in Knative today. This is early on, and we're just getting started, and I'm not going to go into a lot of detail, because tomorrow there's a technical talk, DEV209, where the tech leads will go in and unpack this a little bit more. Right now it focuses on the common components that everybody needs, below the value line, where you want to codify the commonalities, where you're not deriving proprietary business logic, but rather something that's better off if we all get it right.
OK, so right now we're starting with build and serving and events. So what does build provide? As you know, containers are the lingua franca, the common language, of Kubernetes, and they're the common language of Knative as well. But if you're a developer like me, you don't write containers, you write code, so you need something that's going to allow you to orchestrate the process of going from code to container. What the Knative build primitives do is allow you to describe that in such a way that, maybe safely on-cluster with Kaniko or in the cloud with Google Cloud Build, you can have your Kubernetes workflow pull that code into a container. But when you have a container, what do you do with it? You want to serve it. What the Knative serving components do is allow you to describe your workload in a way that makes sense for the developer. You saw some of these in Ryan's demo: the route and the configuration, the sort of twelve-factor-app style of development that we've learned building serverless products at Google, and that will allow the system to do things for you,
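As a hedged sketch of that code-to-container flow: in the early (v1alpha1) Knative build API, a Build resource described a source and a series of container steps. The repository URL, image names, and Kaniko arguments below are illustrative assumptions, not from the talk:

```yaml
# Sketch of an early (v1alpha1) Knative Build resource.
# The git URL and image destination are hypothetical.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: hello-build
spec:
  source:
    git:
      url: https://github.com/example/hello.git   # hypothetical repo
      revision: master
  steps:
    - name: build-and-push
      # Kaniko builds and pushes the image on-cluster, without a Docker daemon
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=gcr.io/example-project/hello   # hypothetical registry path
```

The same description could instead be executed off-cluster by a hosted builder such as Google Cloud Build.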
like scaling it from zero to one, or one to a hundred thousand, or, maybe even better, a hundred thousand back down to zero again. That's the type of thing that having these standard primitives allows you to do, and that's great: it makes a wonderful day-one experience. But what about day two, day five, day one hundred, when you're still trying to run these workloads in real production? Then you need something that's smart about blue-green deployments and traffic shifting, and those are the other components that are inside Knative serving. And you might want to serve these workloads over HTTP or HTTPS.
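As a hedged sketch of the route and configuration primitives mentioned above, in the early (v1alpha1) serving API a Configuration stamped out revisions and a Route split traffic across them; the names, image path, and 90/10 split here are illustrative assumptions:

```yaml
# Sketch of early (v1alpha1) Knative serving resources; names and image are hypothetical.
apiVersion: serving.knative.dev/v1alpha1
kind: Configuration
metadata:
  name: hello
spec:
  revisionTemplate:
    spec:
      container:
        image: gcr.io/example-project/hello   # hypothetical image
---
apiVersion: serving.knative.dev/v1alpha1
kind: Route
metadata:
  name: hello
spec:
  traffic:
    # Blue-green / canary style traffic shifting across revisions
    - revisionName: hello-00001    # the known-good revision keeps most traffic
      percent: 90
    - configurationName: hello     # the latest ready revision gets a canary share
      percent: 10
```

Each revision can then be scaled independently by the platform, including down to zero when it receives no traffic.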
That's great, you can do that: set up an ingress rule, set up a route. But you may also want to attach it to a larger event ecosystem; you may want to attach it to a git commit, or something flowing off your CI/CD pipeline. What the Knative eventing primitives do is allow you to describe, in a developer-friendly way, the binding from event producer to event consumer, and provide the underlying infrastructure to make that happen.
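The eventing API was the newest part of Knative at the time of this talk and has evolved considerably since; as one illustrative shape of that producer-to-consumer binding, the later Broker/Trigger model looks like this, with the event type and service name being hypothetical:

```yaml
# Illustrative only: this uses the later eventing.knative.dev/v1 Trigger shape,
# not the API as it existed at the time of the talk.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: on-git-commit
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.git.commit   # hypothetical CloudEvents type for a git commit
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello                    # the consumer: a Knative Service
```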
So I showed you what's in the box; let's talk a little bit about why we made those choices and how they got in the box. There are some core principles that the development team uses. The first is that it should be idiomatic, and by that I mean: let's take, for example, Kubernetes. If Kubernetes has a way of doing things, if it has a standard native way, we'll do it that way; we won't reinvent the wheel with Knative. It should layer naturally on top: you saw the kubectl demo where the exact same command line works against Knative as well. And also, we shouldn't have you get a PhD in Knative to use it; we should meet you where you are. If you're a Node developer who writes Dockerfiles, we should be able to take your existing idiomatic Node code and your existing idiomatic Dockerfiles, and run them and serve them, with all the advantages that you would get. Second, it should be extensible. I showed you these layers of loose coupling, build and serving and events, and you should be able, in your products, to choose among them; you may want just build, you may want all three. But it also should be pluggable at the bottom. We saw a demo where it was wired into Stackdriver; you should be able to pull that out and put in an ELK stack, or do whatever you need. So we should make it extensible at the top and pluggable at the bottom. And then lastly, and I sort of struggled to find the right word for this, it should feel organic, and by that I really mean we should start from the bottom up and codify the commonalities, the stuff that we all agree on that needs to be there. We
shouldn't be inventing new things that feel rough; we should do the things that you want, in order to have a stable platform for the developers in your org. So let's talk about those developers. Many of us in this room are that persona: we write code all day, and that's what we should be doing for our businesses. These APIs are designed for developers; they're designed to make a developer's life easy; they're designed to provide a serverless experience on top of Kubernetes. But how do developers get that system? Well, operators. Operators provide these systems to developers.
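As a rough sketch of what providing that system looks like for an operator, assuming an existing Kubernetes cluster and an illustrative release URL (exact versions and URLs vary by release):

```shell
# Sketch only: requires a running Kubernetes cluster; the release URL is illustrative.
# Early Knative releases also assumed Istio was installed for routing.
kubectl apply -f https://github.com/knative/serving/releases/download/v0.1.0/release.yaml

# Watch the Knative control-plane components come up
kubectl get pods -n knative-serving
```

Once the control plane is running, developers in the org can deploy workloads against the Knative APIs without touching the underlying infrastructure.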
In fact, you met an operator earlier today, Ryan, who's operating the serverless add-on for GKE, and your organization may have teams of operators who are deploying these things for your businesses. And maybe all of us, hopefully all of us, will also be contributors. This is something that we all want to work on together to get it right, and we strongly believe in the power of open source to do this. So why is Google investing in it, why are we doing it? It's clearly not easy. Frankly, we're doing it because you asked us to. You're
moving your businesses to Kubernetes, but you don't want to leave serverless behind; you want to bring serverless with you. So Google is responding by helping out, by building serverless on top of Kubernetes. Then you ask: why are we doing it in open source? Well, you also want workload portability. You want the freedom of choice to run wherever you want, in your data center or in ours, and we want to help provide that through open source. So to recap: Knative is providing those building blocks for running serverless on Kubernetes, and today we're starting with build and serving and events. It's very early days and we expect it to grow rather big from here. And we're doing this so you can run wherever you want to run: wherever you can run Kubernetes, we think you should be able to run your serverless workloads. And we're very pleased to be doing this not on our own, but with a very broad community of industry partners. And best of all, and I've been waiting to say this for a really long time: Knative is open today. You can go to GitHub. I put this slide
last so you wouldn't all go at the beginning of the talk, but go to github.com/knative and get involved. We're really excited to open it today, so thank you. Thanks, DeWitt. Oh, thanks, DeWitt. So that was amazing; I can't wait to get started with Knative. Oh wait, if you want to get started using this technology today, please go out and sign up for early access to the serverless add-on. I will be sending out invites when I get back from Next, and we have everything ready to go. And of course you
can go learn more about Knative today at that URL, and go check out the GitHub repository; you can install it right onto any Kubernetes cluster. In fact, somebody submitted a pull request in the last four hours that shows how to do yet another install flow on Kubernetes, so super exciting, everybody's already getting involved. And of course this isn't the only talk about this while we're here: please check out tomorrow, especially at 4:35, the DEV209 talk with Ville and Mark, who are hanging out back there, to learn all the technical details, well, some more
of the technical details behind Knative, as well as a number of other serverless talks about where Google is going with serverless today. [Music]